Test Report: Docker_Linux_containerd 14995

411d4579fd248fd57a4259437564c3e08f354535:2022-09-21:25810

Failed tests (16/266)

TestKubernetesUpgrade (556.86s)
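
For reference, the failing sequence can be reproduced by hand with the same commands the test drives; a minimal sketch, with the profile name shortened (versions and flags are taken verbatim from the log below):

    # Start on the old version, stop, then attempt the upgrade.
    PROFILE=kubernetes-upgrade-test   # hypothetical name; the log uses kubernetes-upgrade-20220921215522-10174
    out/minikube-linux-amd64 start -p "$PROFILE" --memory=2200 \
      --kubernetes-version=v1.16.0 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 stop -p "$PROFILE"
    out/minikube-linux-amd64 -p "$PROFILE" status --format={{.Host}}   # exit status 7 ("Stopped") is expected here
    out/minikube-linux-amd64 start -p "$PROFILE" --memory=2200 \
      --kubernetes-version=v1.25.2 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd                   # fails below with exit status 109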

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220921215522-10174 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220921215522-10174 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.93352882s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220921215522-10174
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220921215522-10174: (1.451142512s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 status --format={{.Host}}: exit status 7 (116.677508ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220921215522-10174 --memory=2200 --kubernetes-version=v1.25.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220921215522-10174 --memory=2200 --kubernetes-version=v1.25.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (8m25.341134077s)

-- stdout --
	* [kubernetes-upgrade-20220921215522-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20220921215522-10174 in cluster kubernetes-upgrade-20220921215522-10174
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20220921215522-10174" ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Sep 21 22:03:45 kubernetes-upgrade-20220921215522-10174 kubelet[12183]: E0921 22:03:45.374524   12183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12194]: E0921 22:03:46.123389   12194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	  Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12205]: E0921 22:03:46.873433   12205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	
	

-- /stdout --
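
The exit status 109 traces back to the kubelet lines above: the profile was created for v1.16.0 with the kubelet extra option cni-conf-dir=/etc/cni/net.mk (visible in the ExtraOptions field of the config dumps in the stderr log), and kubelet removed the dockershim-era CNI flags, including --cni-conf-dir, in Kubernetes 1.24. The carried-over option therefore kills every kubelet start under v1.25.2 during flag parsing. A sketch of how one might confirm the stale option against the profile config (assuming jq is available; the path comes from the log):

    # Hypothetical check: the stale kubelet option persists in the profile's config.json.
    jq '.KubernetesConfig.ExtraOptions' \
      "$MINIKUBE_HOME/profiles/kubernetes-upgrade-20220921215522-10174/config.json"
    # expected (per the log): [{"Component":"kubelet","Key":"cni-conf-dir","Value":"/etc/cni/net.mk"}]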
** stderr ** 
	I0921 21:56:10.331340  163433 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:56:10.331470  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:56:10.331485  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:56:10.331492  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:56:10.331667  163433 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 21:56:10.332402  163433 out.go:303] Setting JSON to false
	I0921 21:56:10.334048  163433 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2321,"bootTime":1663795049,"procs":549,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 21:56:10.334125  163433 start.go:125] virtualization: kvm guest
	I0921 21:56:10.335968  163433 out.go:177] * [kubernetes-upgrade-20220921215522-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 21:56:10.337813  163433 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:56:10.337743  163433 notify.go:214] Checking for updates...
	I0921 21:56:10.340552  163433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:56:10.341844  163433 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:56:10.343195  163433 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 21:56:10.344671  163433 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 21:56:10.346603  163433 config.go:180] Loaded profile config "kubernetes-upgrade-20220921215522-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 21:56:10.347270  163433 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:56:10.377224  163433 docker.go:137] docker version: linux-20.10.18
	I0921 21:56:10.377364  163433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:56:10.487660  163433 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:78 SystemTime:2022-09-21 21:56:10.404570965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:56:10.487806  163433 docker.go:254] overlay module found
	I0921 21:56:10.490036  163433 out.go:177] * Using the docker driver based on existing profile
	I0921 21:56:10.491262  163433 start.go:284] selected driver: docker
	I0921 21:56:10.491291  163433 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-20220921215522-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220921215522-10174 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:56:10.491429  163433 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:56:10.492670  163433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:56:10.594538  163433 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:true NGoroutines:78 SystemTime:2022-09-21 21:56:10.521340444 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:56:10.594910  163433 cni.go:95] Creating CNI manager for ""
	I0921 21:56:10.594926  163433 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 21:56:10.594985  163433 start_flags.go:316] config:
	{Name:kubernetes-upgrade-20220921215522-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:kubernetes-upgrade-20220921215522-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:56:10.597705  163433 out.go:177] * Starting control plane node kubernetes-upgrade-20220921215522-10174 in cluster kubernetes-upgrade-20220921215522-10174
	I0921 21:56:10.599045  163433 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 21:56:10.600331  163433 out.go:177] * Pulling base image ...
	I0921 21:56:10.601558  163433 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 21:56:10.601605  163433 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 21:56:10.601617  163433 cache.go:57] Caching tarball of preloaded images
	I0921 21:56:10.601663  163433 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:56:10.601879  163433 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:56:10.601898  163433 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 21:56:10.602026  163433 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/config.json ...
	I0921 21:56:10.630294  163433 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 21:56:10.630320  163433 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 21:56:10.630329  163433 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:56:10.630367  163433 start.go:364] acquiring machines lock for kubernetes-upgrade-20220921215522-10174: {Name:mk20cd57c93d40829efc7af906ff33505be4f28e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:56:10.630503  163433 start.go:368] acquired machines lock for "kubernetes-upgrade-20220921215522-10174" in 79.587µs
	I0921 21:56:10.630528  163433 start.go:96] Skipping create...Using existing machine configuration
	I0921 21:56:10.630533  163433 fix.go:55] fixHost starting: 
	I0921 21:56:10.630793  163433 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921215522-10174 --format={{.State.Status}}
	I0921 21:56:10.652513  163433 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220921215522-10174: state=Stopped err=<nil>
	W0921 21:56:10.652557  163433 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 21:56:10.654643  163433 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220921215522-10174" ...
	I0921 21:56:10.655913  163433 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220921215522-10174
	I0921 21:56:11.033379  163433 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220921215522-10174 --format={{.State.Status}}
	I0921 21:56:11.061699  163433 kic.go:415] container "kubernetes-upgrade-20220921215522-10174" state is running.
	I0921 21:56:11.062108  163433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:11.089465  163433 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/config.json ...
	I0921 21:56:11.089701  163433 machine.go:88] provisioning docker machine ...
	I0921 21:56:11.089740  163433 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220921215522-10174"
	I0921 21:56:11.089791  163433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:11.118858  163433 main.go:134] libmachine: Using SSH client type: native
	I0921 21:56:11.119079  163433 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49344 <nil> <nil>}
	I0921 21:56:11.119101  163433 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220921215522-10174 && echo "kubernetes-upgrade-20220921215522-10174" | sudo tee /etc/hostname
	I0921 21:56:11.119698  163433 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59362->127.0.0.1:49344: read: connection reset by peer
	I0921 21:56:14.259786  163433 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220921215522-10174
	
	I0921 21:56:14.259886  163433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:14.282895  163433 main.go:134] libmachine: Using SSH client type: native
	I0921 21:56:14.283036  163433 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49344 <nil> <nil>}
	I0921 21:56:14.283059  163433 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220921215522-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220921215522-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220921215522-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 21:56:14.415904  163433 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 21:56:14.415930  163433 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 21:56:14.415973  163433 ubuntu.go:177] setting up certificates
	I0921 21:56:14.415983  163433 provision.go:83] configureAuth start
	I0921 21:56:14.416036  163433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:14.446504  163433 provision.go:138] copyHostCerts
	I0921 21:56:14.446571  163433 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 21:56:14.446594  163433 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 21:56:14.446661  163433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 21:56:14.446784  163433 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 21:56:14.446805  163433 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 21:56:14.446843  163433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 21:56:14.446930  163433 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 21:56:14.446943  163433 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 21:56:14.446966  163433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 21:56:14.447045  163433 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220921215522-10174 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220921215522-10174]
	I0921 21:56:14.758339  163433 provision.go:172] copyRemoteCerts
	I0921 21:56:14.758470  163433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 21:56:14.758518  163433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:14.784838  163433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49344 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/kubernetes-upgrade-20220921215522-10174/id_rsa Username:docker}
	I0921 21:56:14.886688  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 21:56:14.907362  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1289 bytes)
	I0921 21:56:14.926571  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 21:56:14.943628  163433 provision.go:86] duration metric: configureAuth took 527.622441ms
	I0921 21:56:14.943658  163433 ubuntu.go:193] setting minikube options for container-runtime
	I0921 21:56:14.943854  163433 config.go:180] Loaded profile config "kubernetes-upgrade-20220921215522-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:56:14.943870  163433 machine.go:91] provisioned docker machine in 3.854154196s
	I0921 21:56:14.943876  163433 start.go:300] post-start starting for "kubernetes-upgrade-20220921215522-10174" (driver="docker")
	I0921 21:56:14.943883  163433 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 21:56:14.943919  163433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 21:56:14.943951  163433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:14.969386  163433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49344 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/kubernetes-upgrade-20220921215522-10174/id_rsa Username:docker}
	I0921 21:56:15.062984  163433 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 21:56:15.065773  163433 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 21:56:15.065796  163433 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 21:56:15.065804  163433 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 21:56:15.065810  163433 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 21:56:15.065822  163433 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 21:56:15.065871  163433 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 21:56:15.065935  163433 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 21:56:15.066011  163433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 21:56:15.072296  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 21:56:15.088672  163433 start.go:303] post-start completed in 144.782833ms
	I0921 21:56:15.088749  163433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:56:15.088797  163433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:15.114151  163433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49344 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/kubernetes-upgrade-20220921215522-10174/id_rsa Username:docker}
	I0921 21:56:15.204163  163433 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:56:15.208064  163433 fix.go:57] fixHost completed within 4.577524765s
	I0921 21:56:15.208089  163433 start.go:83] releasing machines lock for "kubernetes-upgrade-20220921215522-10174", held for 4.577567952s
	I0921 21:56:15.208196  163433 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:15.231039  163433 ssh_runner.go:195] Run: systemctl --version
	I0921 21:56:15.231096  163433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:15.231129  163433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 21:56:15.231189  163433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220921215522-10174
	I0921 21:56:15.258580  163433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49344 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/kubernetes-upgrade-20220921215522-10174/id_rsa Username:docker}
	I0921 21:56:15.260147  163433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49344 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/kubernetes-upgrade-20220921215522-10174/id_rsa Username:docker}
	I0921 21:56:15.379852  163433 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 21:56:15.392076  163433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 21:56:15.401772  163433 docker.go:188] disabling docker service ...
	I0921 21:56:15.401815  163433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 21:56:15.412274  163433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 21:56:15.421438  163433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 21:56:15.492628  163433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 21:56:15.580773  163433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 21:56:15.590130  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 21:56:15.602646  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 21:56:15.610097  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 21:56:15.618079  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 21:56:15.627516  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
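	Taken together, the tee and sed commands above leave the runtime configured roughly as follows; a sketch of just the affected keys, not the complete files:

	    # /etc/crictl.yaml (written verbatim by the tee above)
	    runtime-endpoint: unix:///run/containerd/containerd.sock
	    image-endpoint: unix:///run/containerd/containerd.sock

	    # /etc/containerd/config.toml (only the keys rewritten by the sed edits above)
	    sandbox_image = "registry.k8s.io/pause:3.8"
	    restrict_oom_score_adj = false
	    SystemdCgroup = false
	    conf_dir = "/etc/cni/net.mk"    # the same net.mk directory the kubelet option points at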
	I0921 21:56:15.635486  163433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 21:56:15.642158  163433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 21:56:15.649321  163433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 21:56:15.735660  163433 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 21:56:15.845934  163433 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 21:56:15.846008  163433 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 21:56:15.849816  163433 start.go:471] Will wait 60s for crictl version
	I0921 21:56:15.849880  163433 ssh_runner.go:195] Run: sudo crictl version
	I0921 21:56:15.892953  163433 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 21:56:15.893038  163433 ssh_runner.go:195] Run: containerd --version
	I0921 21:56:15.925993  163433 ssh_runner.go:195] Run: containerd --version
	I0921 21:56:15.961048  163433 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 21:56:15.962324  163433 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220921215522-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:56:15.985812  163433 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0921 21:56:15.989027  163433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 21:56:16.000091  163433 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0921 21:56:16.001787  163433 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 21:56:16.001873  163433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 21:56:16.028632  163433 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.2". assuming images are not preloaded.
	I0921 21:56:16.028687  163433 ssh_runner.go:195] Run: which lz4
	I0921 21:56:16.032046  163433 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0921 21:56:16.035143  163433 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0921 21:56:16.035176  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (426272118 bytes)
	I0921 21:56:17.067812  163433 containerd.go:496] Took 1.035797 seconds to copy over tarball
	I0921 21:56:17.067906  163433 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0921 21:56:19.516702  163433 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.448764257s)
	I0921 21:56:19.516735  163433 containerd.go:503] Took 2.448890 seconds t extract the tarball
	I0921 21:56:19.516748  163433 ssh_runner.go:146] rm: /preloaded.tar.lz4
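	The preload handling above is a check-then-copy cycle: stat the tarball on the node, transfer it only if missing, unpack, then delete. A condensed bash sketch of the same steps (in the real flow the copy runs over minikube's SSH runner):

	    # Check-then-copy cycle for the preloaded image tarball, as in the log above.
	    if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
	      # $PRELOAD_CACHE is a hypothetical stand-in for the host-side cache directory.
	      scp "$PRELOAD_CACHE/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4" \
	          node:/preloaded.tar.lz4
	    fi
	    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack the cached images into /var
	    rm /preloaded.tar.lz4                            # reclaim the space once extracted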
	I0921 21:56:20.318669  163433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 21:56:20.392284  163433 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 21:56:20.605112  163433 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 21:56:20.630679  163433 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.2 registry.k8s.io/kube-controller-manager:v1.25.2 registry.k8s.io/kube-scheduler:v1.25.2 registry.k8s.io/kube-proxy:v1.25.2 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0921 21:56:20.630734  163433 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 21:56:20.630773  163433 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 21:56:20.630808  163433 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I0921 21:56:20.630818  163433 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 21:56:20.630837  163433 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 21:56:20.630920  163433 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 21:56:20.630934  163433 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 21:56:20.630783  163433 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I0921 21:56:20.631918  163433 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.2: Error: No such image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 21:56:20.631944  163433 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.2: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 21:56:20.631963  163433 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.2: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 21:56:20.631983  163433 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I0921 21:56:20.631963  163433 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I0921 21:56:20.632025  163433 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.2: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 21:56:20.632069  163433 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 21:56:20.632141  163433 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 21:56:21.204916  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I0921 21:56:21.208786  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.2"
	I0921 21:56:21.208993  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I0921 21:56:21.211741  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I0921 21:56:21.220098  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.2"
	I0921 21:56:21.229445  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.2"
	I0921 21:56:21.263606  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.2"
	I0921 21:56:21.481454  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0921 21:56:22.085332  163433 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I0921 21:56:22.085388  163433 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I0921 21:56:22.085446  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.085458  163433 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.2" does not exist at hash "ca0ea1ee3cfd3d1ced15a8e6f4a236a436c5733b20a0b2dbbfbfd59977e12959" in container runtime
	I0921 21:56:22.085498  163433 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 21:56:22.085542  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.088947  163433 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I0921 21:56:22.088988  163433 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I0921 21:56:22.089022  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.089129  163433 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I0921 21:56:22.089157  163433 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 21:56:22.089184  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.105533  163433 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.2" does not exist at hash "97801f83949087fbdcc09b1c84ddda0ed5d01f4aabd17787a7714eb2796082b3" in container runtime
	I0921 21:56:22.105578  163433 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 21:56:22.105628  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.176346  163433 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.2" needs transfer: "registry.k8s.io/kube-proxy:v1.25.2" does not exist at hash "1c7d8c51823b5eb08189d553d911097ec8a6a40fea40bb5bdea91842f30d2e86" in container runtime
	I0921 21:56:22.176404  163433 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 21:56:22.176458  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.190226  163433 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.2" does not exist at hash "dbfceb93c69b6d85661fe46c3e50de9e927e4895ebba2892a1db116e69c81890" in container runtime
	I0921 21:56:22.190283  163433 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 21:56:22.190325  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.212989  163433 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0921 21:56:22.213034  163433 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 21:56:22.213075  163433 ssh_runner.go:195] Run: which crictl
	I0921 21:56:22.213125  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I0921 21:56:22.213204  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.2
	I0921 21:56:22.213240  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I0921 21:56:22.213287  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I0921 21:56:22.213299  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.2
	I0921 21:56:22.213370  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.2
	I0921 21:56:22.213419  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 21:56:24.570622  163433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0: (2.357457258s)
	I0921 21:56:24.570659  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I0921 21:56:24.570715  163433 ssh_runner.go:235] Completed: which crictl: (2.357624166s)
	I0921 21:56:24.570739  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I0921 21:56:24.570753  163433 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 21:56:24.574767  163433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.2: (2.361444016s)
	I0921 21:56:24.574799  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2
	I0921 21:56:24.574830  163433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.2: (2.361542546s)
	I0921 21:56:24.574853  163433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.2: (2.361455016s)
	I0921 21:56:24.574868  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2
	I0921 21:56:24.574875  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2
	I0921 21:56:24.574889  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 21:56:24.574893  163433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8: (2.361630108s)
	I0921 21:56:24.574902  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I0921 21:56:24.574944  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 21:56:24.574945  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 21:56:24.574957  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I0921 21:56:24.574961  163433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3: (2.361653306s)
	I0921 21:56:24.574972  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0921 21:56:24.574996  163433 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.2: (2.361558149s)
	I0921 21:56:24.575004  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2
	I0921 21:56:24.575016  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I0921 21:56:24.575048  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 21:56:24.611381  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I0921 21:56:24.611407  163433 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0921 21:56:24.611424  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I0921 21:56:24.611447  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.2': No such file or directory
	I0921 21:56:24.611466  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 --> /var/lib/minikube/images/kube-scheduler_v1.25.2 (15798784 bytes)
	I0921 21:56:24.611506  163433 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0921 21:56:24.611530  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0921 21:56:24.611564  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I0921 21:56:24.611579  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I0921 21:56:24.611607  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I0921 21:56:24.611669  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.2': No such file or directory
	I0921 21:56:24.611695  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 --> /var/lib/minikube/images/kube-proxy_v1.25.2 (20265472 bytes)
	I0921 21:56:24.611708  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.2': No such file or directory
	I0921 21:56:24.611765  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 --> /var/lib/minikube/images/kube-apiserver_v1.25.2 (34238464 bytes)
	I0921 21:56:24.611773  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.2': No such file or directory
	I0921 21:56:24.611793  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 --> /var/lib/minikube/images/kube-controller-manager_v1.25.2 (31264256 bytes)
	I0921 21:56:24.622202  163433 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0921 21:56:24.622244  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0921 21:56:24.682710  163433 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I0921 21:56:24.682788  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I0921 21:56:24.937271  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I0921 21:56:24.937314  163433 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0921 21:56:24.937357  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0921 21:56:25.588194  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0921 21:56:25.588255  163433 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 21:56:25.588306  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 21:56:26.465593  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 from cache
	I0921 21:56:26.465646  163433 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I0921 21:56:26.465706  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I0921 21:56:27.136655  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I0921 21:56:27.136701  163433 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 21:56:27.136748  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 21:56:27.776645  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 from cache
	I0921 21:56:27.776692  163433 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 21:56:27.776740  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 21:56:29.653312  163433 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.2: (1.876522782s)
	I0921 21:56:29.653352  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 from cache
	I0921 21:56:29.653387  163433 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 21:56:29.653443  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 21:56:32.228396  163433 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.2: (2.574920147s)
	I0921 21:56:32.228432  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 from cache
	I0921 21:56:32.228463  163433 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I0921 21:56:32.228507  163433 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I0921 21:56:37.285785  163433 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (5.057244447s)
	I0921 21:56:37.285819  163433 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I0921 21:56:37.285864  163433 cache_images.go:123] Successfully loaded all cached images
	I0921 21:56:37.285878  163433 cache_images.go:92] LoadImages completed in 16.655174825s
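The sequence above is the cache-loading pattern repeated for every image: a stat existence check on the node (non-zero exit means the tarball is absent), a transfer of the cached tarball, then a sudo ctr -n=k8s.io images import into containerd's k8s.io namespace. A minimal sketch of that pattern, assuming a local shell in place of minikube's SSH runner (paths illustrative):

	// ensureImage mirrors the stat -> transfer -> ctr import steps in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensureImage(cached, dest string) error {
		// stat exits non-zero when the file is absent ("cannot stat ..." above).
		if exec.Command("stat", "-c", "%s %y", dest).Run() != nil {
			// minikube does this as an scp over SSH; plain cp stands in here.
			if err := exec.Command("cp", cached, dest).Run(); err != nil {
				return fmt.Errorf("transfer %s: %w", cached, err)
			}
		}
		// Load the tarball into containerd's k8s.io namespace.
		if out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", dest).CombinedOutput(); err != nil {
			return fmt.Errorf("import %s: %v\n%s", dest, err, out)
		}
		return nil
	}

	func main() {
		if err := ensureImage("/tmp/cache/pause_3.8", "/var/lib/minikube/images/pause_3.8"); err != nil {
			fmt.Println(err)
		}
	}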
	I0921 21:56:37.285931  163433 ssh_runner.go:195] Run: sudo crictl info
	I0921 21:56:37.309301  163433 cni.go:95] Creating CNI manager for ""
	I0921 21:56:37.309329  163433 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 21:56:37.309344  163433 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 21:56:37.309360  163433 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220921215522-10174 NodeName:kubernetes-upgrade-20220921215522-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 21:56:37.309517  163433 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20220921215522-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 21:56:37.309616  163433 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20220921215522-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:kubernetes-upgrade-20220921215522-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
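Note: the generated ExecStart line above still passes --cni-conf-dir=/etc/cni/net.mk, carried over from the kubelet ExtraOptions in the profile config ({Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}). That flag was removed from the kubelet together with the other dockershim-era CNI flags, so the v1.25.2 kubelet refuses to start with it; this appears to be the root cause of the "unknown flag: --cni-conf-dir" crash loop visible in the kubelet journal later in this log.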
	I0921 21:56:37.309675  163433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 21:56:37.316733  163433 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 21:56:37.316807  163433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 21:56:37.323261  163433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (563 bytes)
	I0921 21:56:37.335521  163433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 21:56:37.348505  163433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I0921 21:56:37.361258  163433 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0921 21:56:37.364268  163433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
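The bash one-liner above rewrites /etc/hosts: filter out any stale control-plane.minikube.internal entry, append the current mapping, and copy the result back via a temp file. A rough equivalent in Go, assuming direct file access rather than the sudo/bash pipeline:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any existing control-plane.minikube.internal mapping.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.67.2\tcontrol-plane.minikube.internal")
		// Stage to a temp file; the log then does `sudo cp` over /etc/hosts.
		if err := os.WriteFile("/tmp/h.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}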
	I0921 21:56:37.431032  163433 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174 for IP: 192.168.67.2
	I0921 21:56:37.431171  163433 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 21:56:37.431227  163433 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 21:56:37.431327  163433 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/client.key
	I0921 21:56:37.431423  163433 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/apiserver.key.c7fa3a9e
	I0921 21:56:37.431484  163433 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/proxy-client.key
	I0921 21:56:37.431625  163433 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 21:56:37.431679  163433 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 21:56:37.431692  163433 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 21:56:37.431738  163433 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 21:56:37.431770  163433 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 21:56:37.431797  163433 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 21:56:37.431839  163433 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 21:56:37.432438  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 21:56:37.452586  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 21:56:37.470430  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 21:56:37.490494  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 21:56:37.508293  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 21:56:37.525537  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 21:56:37.542461  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 21:56:37.558844  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 21:56:37.575269  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 21:56:37.591971  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 21:56:37.609047  163433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 21:56:37.626087  163433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 21:56:37.638310  163433 ssh_runner.go:195] Run: openssl version
	I0921 21:56:37.642981  163433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 21:56:37.649948  163433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 21:56:37.652903  163433 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 21:56:37.652945  163433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 21:56:37.657558  163433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 21:56:37.663908  163433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 21:56:37.670847  163433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 21:56:37.673590  163433 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 21:56:37.673627  163433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 21:56:37.678260  163433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 21:56:37.684643  163433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 21:56:37.691591  163433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 21:56:37.694518  163433 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 21:56:37.694560  163433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 21:56:37.699047  163433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
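The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA certificate in /etc/ssl/certs gets a <subject-hash>.0 symlink so TLS clients can find it by hash. A small sketch of the same step, assuming openssl on PATH and an illustrative cert path:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs equivalent (needs root); the log guards with `test -L` first.
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}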
	I0921 21:56:37.705399  163433 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-20220921215522-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:kubernetes-upgrade-20220921215522-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:56:37.705510  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 21:56:37.705541  163433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 21:56:37.728701  163433 cri.go:87] found id: ""
	I0921 21:56:37.728761  163433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 21:56:37.735360  163433 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 21:56:37.735382  163433 kubeadm.go:627] restartCluster start
	I0921 21:56:37.735414  163433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 21:56:37.742060  163433 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 21:56:37.742793  163433 kubeconfig.go:116] verify returned: extract IP: "kubernetes-upgrade-20220921215522-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:56:37.743179  163433 kubeconfig.go:127] "kubernetes-upgrade-20220921215522-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 21:56:37.743831  163433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 21:56:37.773145  163433 kapi.go:59] client config for kubernetes-upgrade-20220921215522-10174: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kubernetes-upgrade-20220921215522-10174/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x177c400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0921 21:56:37.773709  163433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 21:56:37.781114  163433 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-09-21 21:55:39.346078055 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-09-21 21:56:37.357931182 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20220921215522-10174
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.25.2
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
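The "needs reconfigure" decision above hinges on diff's exit status: diff -u exits 0 when the staged kubeadm.yaml matches the live one and 1 when they differ, and the unified diff itself (v1beta1 -> v1beta3, v1.16.0 -> v1.25.2, and so on) doubles as the log message. A compact sketch of that probe, with illustrative paths:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// needsReconfigure reports whether two kubeadm configs differ.
	// diff exits 1 on differing files (and >1 on real errors, which
	// this sketch conflates with "differ" for brevity).
	func needsReconfigure(current, proposed string) (bool, string) {
		out, err := exec.Command("diff", "-u", current, proposed).CombinedOutput()
		return err != nil, string(out)
	}

	func main() {
		changed, patch := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed)
		fmt.Print(patch)
	}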
	I0921 21:56:37.781133  163433 kubeadm.go:1114] stopping kube-system containers ...
	I0921 21:56:37.781144  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 21:56:37.781189  163433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 21:56:37.806824  163433 cri.go:87] found id: ""
	I0921 21:56:37.806889  163433 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 21:56:37.816568  163433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 21:56:37.823171  163433 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5763 Sep 21 21:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5795 Sep 21 21:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5959 Sep 21 21:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Sep 21 21:55 /etc/kubernetes/scheduler.conf
	
	I0921 21:56:37.823215  163433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0921 21:56:37.829655  163433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0921 21:56:37.836313  163433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0921 21:56:37.843147  163433 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0921 21:56:37.849644  163433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 21:56:37.868031  163433 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 21:56:37.868055  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 21:56:37.914056  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 21:56:38.509391  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 21:56:38.720801  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 21:56:38.776684  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
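Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config. A sketch of that phased sequence, assuming kubeadm on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			// e.g. kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
			args := append([]string{"init", "phase"}, strings.Fields(phase)...)
			args = append(args, "--config", cfg)
			if out, err := exec.Command("sudo", "kubeadm", args...).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
				return
			}
		}
	}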
	I0921 21:56:38.821473  163433 api_server.go:51] waiting for apiserver process to appear ...
	I0921 21:56:38.821541  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:39.363418  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:39.863350  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:40.362776  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:40.862989  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:41.363110  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:41.864070  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:42.363613  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:42.863166  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:43.363538  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:43.863271  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:44.363689  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:44.863710  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:45.363450  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:45.863207  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:46.363393  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:46.863017  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:47.362827  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:47.862843  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:48.363012  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:48.862984  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:49.363214  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:49.862940  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:50.363406  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:50.863307  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:51.363226  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:51.863752  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:52.363563  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:52.863767  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:53.362794  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:53.863045  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:54.363693  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:54.863784  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:55.363193  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:55.863562  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:56.363072  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:56.863673  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:57.363301  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:57.862802  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:58.363661  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:58.862859  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:59.363013  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:56:59.863552  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:00.363736  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:00.863689  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:01.362901  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:01.863692  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:02.363764  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:02.863627  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:03.363666  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:03.863238  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:04.363699  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:04.863144  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:05.363655  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:05.862795  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:06.363580  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:06.863697  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:07.363056  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:07.863381  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:08.362969  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:08.863597  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:09.362817  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:09.863768  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:10.363112  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:10.863677  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:11.363630  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:11.862948  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:12.363055  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:12.863116  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:13.363194  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:13.863812  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:14.363772  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:14.863066  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:15.363594  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:15.862833  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:16.363057  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:16.863476  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:17.363457  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:17.863598  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:18.363200  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:18.863662  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:19.363783  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:19.863606  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:20.363513  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:20.863177  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:21.363552  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:21.863605  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:22.362969  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:22.863219  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:23.362975  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:23.862827  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:24.363759  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:24.862887  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:25.363082  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:25.862946  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:26.362806  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:26.862848  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:27.362854  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:27.862860  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:28.363238  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:28.863462  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:29.363410  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:29.862868  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:30.362806  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:30.863747  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:31.362855  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:31.862795  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:32.362839  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:32.863197  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:33.363200  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:33.863202  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:34.363417  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:34.863330  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:35.363564  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:35.863799  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:36.363495  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:36.862795  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:37.363575  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:37.863132  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:38.363245  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
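The block above is a fixed-interval wait loop: one pgrep for the apiserver process roughly every 500ms, from 21:56:38 until it gives up around 21:57:38 without ever seeing a match. The shape of that loop, sketched with an assumed one-minute timeout:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for the apiserver process until the deadline.
	func waitForAPIServer(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return true
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false
	}

	func main() {
		fmt.Println(waitForAPIServer(time.Minute))
	}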
	I0921 21:57:38.863005  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:57:38.863086  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:57:38.894188  163433 cri.go:87] found id: ""
	I0921 21:57:38.894218  163433 logs.go:274] 0 containers: []
	W0921 21:57:38.894227  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:57:38.894235  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:57:38.894283  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:57:38.926492  163433 cri.go:87] found id: ""
	I0921 21:57:38.926522  163433 logs.go:274] 0 containers: []
	W0921 21:57:38.926530  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:57:38.926537  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:57:38.926597  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:57:38.952934  163433 cri.go:87] found id: ""
	I0921 21:57:38.952959  163433 logs.go:274] 0 containers: []
	W0921 21:57:38.952965  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:57:38.952971  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:57:38.953018  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:57:38.974768  163433 cri.go:87] found id: ""
	I0921 21:57:38.974796  163433 logs.go:274] 0 containers: []
	W0921 21:57:38.974803  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:57:38.974811  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:57:38.974864  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:57:39.014918  163433 cri.go:87] found id: ""
	I0921 21:57:39.014948  163433 logs.go:274] 0 containers: []
	W0921 21:57:39.014957  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:57:39.014964  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:57:39.015013  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:57:39.044495  163433 cri.go:87] found id: ""
	I0921 21:57:39.044521  163433 logs.go:274] 0 containers: []
	W0921 21:57:39.044531  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:57:39.044539  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:57:39.044600  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:57:39.070959  163433 cri.go:87] found id: ""
	I0921 21:57:39.070983  163433 logs.go:274] 0 containers: []
	W0921 21:57:39.070989  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:57:39.070995  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:57:39.071036  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:57:39.104125  163433 cri.go:87] found id: ""
	I0921 21:57:39.104156  163433 logs.go:274] 0 containers: []
	W0921 21:57:39.104165  163433 logs.go:276] No container was found matching "kube-controller-manager"
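With the apiserver never appearing, the log falls back to a container census: one `crictl ps -a --quiet --name=<component>` per control-plane component, where empty output means no container (running or exited) was ever created. A sketch of that census loop:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager"}
		for _, name := range components {
			// --quiet prints one container ID per line; none means "not found".
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			fmt.Printf("%s: %d containers\n", name, len(strings.Fields(string(out))))
		}
	}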
	I0921 21:57:39.104177  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:57:39.104190  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:57:39.130272  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:49 kubernetes-upgrade-20220921215522-10174 kubelet[1391]: E0921 21:56:49.371828    1391 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.130914  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:50 kubernetes-upgrade-20220921215522-10174 kubelet[1406]: E0921 21:56:50.120343    1406 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.131517  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:50 kubernetes-upgrade-20220921215522-10174 kubelet[1419]: E0921 21:56:50.892521    1419 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.132096  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:51 kubernetes-upgrade-20220921215522-10174 kubelet[1433]: E0921 21:56:51.655491    1433 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.132480  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:52 kubernetes-upgrade-20220921215522-10174 kubelet[1445]: E0921 21:56:52.410609    1445 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.132859  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:53 kubernetes-upgrade-20220921215522-10174 kubelet[1458]: E0921 21:56:53.129607    1458 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.133258  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:53 kubernetes-upgrade-20220921215522-10174 kubelet[1470]: E0921 21:56:53.885429    1470 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.133655  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:54 kubernetes-upgrade-20220921215522-10174 kubelet[1485]: E0921 21:56:54.633153    1485 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.134077  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:55 kubernetes-upgrade-20220921215522-10174 kubelet[1498]: E0921 21:56:55.380845    1498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.134490  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:56 kubernetes-upgrade-20220921215522-10174 kubelet[1512]: E0921 21:56:56.130685    1512 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.134899  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:56 kubernetes-upgrade-20220921215522-10174 kubelet[1525]: E0921 21:56:56.871709    1525 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.135301  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:57 kubernetes-upgrade-20220921215522-10174 kubelet[1540]: E0921 21:56:57.621586    1540 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.135706  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:58 kubernetes-upgrade-20220921215522-10174 kubelet[1554]: E0921 21:56:58.373184    1554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.136148  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:59 kubernetes-upgrade-20220921215522-10174 kubelet[1569]: E0921 21:56:59.122077    1569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.136557  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:59 kubernetes-upgrade-20220921215522-10174 kubelet[1582]: E0921 21:56:59.880949    1582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.136971  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:00 kubernetes-upgrade-20220921215522-10174 kubelet[1596]: E0921 21:57:00.626960    1596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.137401  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:01 kubernetes-upgrade-20220921215522-10174 kubelet[1609]: E0921 21:57:01.373032    1609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.137779  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:02 kubernetes-upgrade-20220921215522-10174 kubelet[1623]: E0921 21:57:02.152349    1623 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.138155  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:02 kubernetes-upgrade-20220921215522-10174 kubelet[1636]: E0921 21:57:02.879019    1636 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.138535  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:03 kubernetes-upgrade-20220921215522-10174 kubelet[1651]: E0921 21:57:03.630895    1651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.138930  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:04 kubernetes-upgrade-20220921215522-10174 kubelet[1663]: E0921 21:57:04.384693    1663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.139319  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:05 kubernetes-upgrade-20220921215522-10174 kubelet[1677]: E0921 21:57:05.137440    1677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.139710  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:05 kubernetes-upgrade-20220921215522-10174 kubelet[1690]: E0921 21:57:05.872384    1690 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.140128  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:06 kubernetes-upgrade-20220921215522-10174 kubelet[1704]: E0921 21:57:06.626578    1704 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.140516  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:07 kubernetes-upgrade-20220921215522-10174 kubelet[1716]: E0921 21:57:07.372786    1716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.140897  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:08 kubernetes-upgrade-20220921215522-10174 kubelet[1731]: E0921 21:57:08.125209    1731 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.141280  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:08 kubernetes-upgrade-20220921215522-10174 kubelet[1743]: E0921 21:57:08.875144    1743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.141682  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:09 kubernetes-upgrade-20220921215522-10174 kubelet[1758]: E0921 21:57:09.636568    1758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.142068  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:10 kubernetes-upgrade-20220921215522-10174 kubelet[1770]: E0921 21:57:10.371114    1770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.142441  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:11 kubernetes-upgrade-20220921215522-10174 kubelet[1785]: E0921 21:57:11.135143    1785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.142819  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:11 kubernetes-upgrade-20220921215522-10174 kubelet[1797]: E0921 21:57:11.883503    1797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.143206  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:12 kubernetes-upgrade-20220921215522-10174 kubelet[1813]: E0921 21:57:12.638473    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.143589  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:13 kubernetes-upgrade-20220921215522-10174 kubelet[1825]: E0921 21:57:13.380895    1825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.144020  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:14 kubernetes-upgrade-20220921215522-10174 kubelet[1840]: E0921 21:57:14.132900    1840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.144398  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:14 kubernetes-upgrade-20220921215522-10174 kubelet[1854]: E0921 21:57:14.886412    1854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.144876  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:15 kubernetes-upgrade-20220921215522-10174 kubelet[1869]: E0921 21:57:15.626883    1869 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.145291  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:16 kubernetes-upgrade-20220921215522-10174 kubelet[1883]: E0921 21:57:16.371277    1883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.145674  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:17 kubernetes-upgrade-20220921215522-10174 kubelet[1898]: E0921 21:57:17.133191    1898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.146058  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:17 kubernetes-upgrade-20220921215522-10174 kubelet[1911]: E0921 21:57:17.887173    1911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.146436  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:18 kubernetes-upgrade-20220921215522-10174 kubelet[1926]: E0921 21:57:18.653902    1926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.146835  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:19 kubernetes-upgrade-20220921215522-10174 kubelet[1938]: E0921 21:57:19.383818    1938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.147210  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:20 kubernetes-upgrade-20220921215522-10174 kubelet[1952]: E0921 21:57:20.134629    1952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.147585  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:20 kubernetes-upgrade-20220921215522-10174 kubelet[1964]: E0921 21:57:20.874735    1964 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.147986  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:21 kubernetes-upgrade-20220921215522-10174 kubelet[1980]: E0921 21:57:21.642629    1980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.148364  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:22 kubernetes-upgrade-20220921215522-10174 kubelet[1993]: E0921 21:57:22.383938    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.148761  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2007]: E0921 21:57:23.163080    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.149153  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2019]: E0921 21:57:23.898283    2019 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.149530  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:24 kubernetes-upgrade-20220921215522-10174 kubelet[2034]: E0921 21:57:24.644635    2034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.149914  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:25 kubernetes-upgrade-20220921215522-10174 kubelet[2048]: E0921 21:57:25.386236    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.150298  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2062]: E0921 21:57:26.138653    2062 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.150674  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2074]: E0921 21:57:26.881606    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.151060  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:27 kubernetes-upgrade-20220921215522-10174 kubelet[2089]: E0921 21:57:27.634384    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.151448  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:28 kubernetes-upgrade-20220921215522-10174 kubelet[2101]: E0921 21:57:28.380595    2101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.151849  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2115]: E0921 21:57:29.141993    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.152234  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2127]: E0921 21:57:29.881873    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.152615  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:30 kubernetes-upgrade-20220921215522-10174 kubelet[2141]: E0921 21:57:30.643982    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.152990  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:31 kubernetes-upgrade-20220921215522-10174 kubelet[2153]: E0921 21:57:31.387353    2153 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.153364  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2167]: E0921 21:57:32.141004    2167 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.153744  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2180]: E0921 21:57:32.874796    2180 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.154127  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:33 kubernetes-upgrade-20220921215522-10174 kubelet[2195]: E0921 21:57:33.642006    2195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.154507  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:34 kubernetes-upgrade-20220921215522-10174 kubelet[2208]: E0921 21:57:34.374657    2208 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.154901  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2222]: E0921 21:57:35.147067    2222 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.155278  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2235]: E0921 21:57:35.879490    2235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.155679  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:36 kubernetes-upgrade-20220921215522-10174 kubelet[2250]: E0921 21:57:36.640238    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.156091  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:37 kubernetes-upgrade-20220921215522-10174 kubelet[2263]: E0921 21:57:37.374557    2263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.156477  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2278]: E0921 21:57:38.141807    2278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.156890  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2290]: E0921 21:57:38.876371    2290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
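The run of identical errors above is the root cause of this failure: kubelet v1.25.2 no longer accepts the --cni-conf-dir flag (it was removed along with the other dockershim-era networking flags in Kubernetes 1.24+; CNI configuration now belongs to the container runtime), but the node's kubelet unit still passes it, so systemd relaunches kubelet roughly every 750 ms, producing this stream of identical errors with steadily increasing PIDs. A minimal manual-recovery sketch follows, assuming minikube keeps its kubelet flags in the usual kubeadm drop-in path (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf; verify on the node before editing):
	# Locate and drop the stale flag inside the node, then restart kubelet.
	minikube ssh -p kubernetes-upgrade-20220921215522-10174
	sudo grep -n -- '--cni-conf-dir' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo sed -i 's/ --cni-conf-dir=[^ ]*//' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload && sudo systemctl restart kubelet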
	I0921 21:57:39.157024  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:57:39.157039  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:57:39.173331  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:57:39.173366  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:57:39.248752  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
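The describe-nodes failure here is a downstream symptom rather than a second bug: with kubelet unable to start, the kube-apiserver static pod is never created, so anything dialing the apiserver on localhost:8443 gets connection refused. Two quick checks on the node (both commands already appear elsewhere in this log) make the causal chain visible:
	sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"
	sudo crictl ps -a --name=kube-apiserver   # empty: the static pod was never started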
	I0921 21:57:39.248782  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:57:39.248801  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:57:39.292441  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:57:39.292484  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
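The backtick fallback in the command above is worth spelling out: it prefers crictl when it is on PATH and only falls back to docker ps -a if the crictl invocation fails, which keeps this gather step working on both containerd and docker runtimes. An equivalent long-hand form (the variable name is ours):
	CRICTL="$(which crictl || echo crictl)"    # PATH hit if any, else the bare name
	sudo "$CRICTL" ps -a || sudo docker ps -a  # containerd first, docker as a last resort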
	I0921 21:57:39.327194  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:57:39.327244  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:57:39.327410  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:57:39.327431  163433 out.go:239]   Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2235]: E0921 21:57:35.879490    2235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.327439  163433 out.go:239]   Sep 21 21:57:36 kubernetes-upgrade-20220921215522-10174 kubelet[2250]: E0921 21:57:36.640238    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.327447  163433 out.go:239]   Sep 21 21:57:37 kubernetes-upgrade-20220921215522-10174 kubelet[2263]: E0921 21:57:37.374557    2263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.327461  163433 out.go:239]   Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2278]: E0921 21:57:38.141807    2278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:39.327469  163433 out.go:239]   Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2290]: E0921 21:57:38.876371    2290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:57:39.327476  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:57:39.327485  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
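From here the output settles into a fixed cadence: minikube waits for a running apiserver, and on each failed check it re-gathers the same five log sources and prints the same problem summary. The timestamps (21:57:39, 21:57:49, 21:58:00) imply a loop of roughly the following shape (a sketch inferred from this log, not minikube's actual code):
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 10   # matches the ~10 s spacing of the cycles below
	  # re-gather kubelet / dmesg / describe-nodes / containerd / container-status logs
	done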
	I0921 21:57:49.328705  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:49.362930  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:57:49.363009  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:57:49.393912  163433 cri.go:87] found id: ""
	I0921 21:57:49.393948  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.393958  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:57:49.393966  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:57:49.394028  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:57:49.422659  163433 cri.go:87] found id: ""
	I0921 21:57:49.422690  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.422699  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:57:49.422705  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:57:49.422768  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:57:49.447909  163433 cri.go:87] found id: ""
	I0921 21:57:49.447936  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.447943  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:57:49.447949  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:57:49.448002  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:57:49.470954  163433 cri.go:87] found id: ""
	I0921 21:57:49.470981  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.470988  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:57:49.470993  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:57:49.471035  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:57:49.495362  163433 cri.go:87] found id: ""
	I0921 21:57:49.495390  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.495398  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:57:49.495407  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:57:49.495465  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:57:49.520329  163433 cri.go:87] found id: ""
	I0921 21:57:49.520352  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.520360  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:57:49.520367  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:57:49.520406  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:57:49.544637  163433 cri.go:87] found id: ""
	I0921 21:57:49.544663  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.544669  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:57:49.544677  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:57:49.544739  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:57:49.569543  163433 cri.go:87] found id: ""
	I0921 21:57:49.569572  163433 logs.go:274] 0 containers: []
	W0921 21:57:49.569581  163433 logs.go:276] No container was found matching "kube-controller-manager"
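All eight control-plane queries in this pass come back empty for the same reason: a crash-looping kubelet never starts a static pod or mirror container, so every crictl ps filter returns nothing. The eight invocations above collapse into a single loop if you want to reproduce the check by hand (the loop variable is ours):
	for n in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kubernetes-dashboard storage-provisioner kube-controller-manager; do
	  printf '%s: %d container(s)\n' "$n" "$(sudo crictl ps -a --quiet --name="$n" | wc -l)"
	done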
	I0921 21:57:49.569593  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:57:49.569607  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:57:49.586699  163433 logs.go:138] Found kubelet problem: Sep 21 21:56:59 kubernetes-upgrade-20220921215522-10174 kubelet[1582]: E0921 21:56:59.880949    1582 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.587358  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:00 kubernetes-upgrade-20220921215522-10174 kubelet[1596]: E0921 21:57:00.626960    1596 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.588060  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:01 kubernetes-upgrade-20220921215522-10174 kubelet[1609]: E0921 21:57:01.373032    1609 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.588754  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:02 kubernetes-upgrade-20220921215522-10174 kubelet[1623]: E0921 21:57:02.152349    1623 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.589453  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:02 kubernetes-upgrade-20220921215522-10174 kubelet[1636]: E0921 21:57:02.879019    1636 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.590100  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:03 kubernetes-upgrade-20220921215522-10174 kubelet[1651]: E0921 21:57:03.630895    1651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.590749  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:04 kubernetes-upgrade-20220921215522-10174 kubelet[1663]: E0921 21:57:04.384693    1663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.591388  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:05 kubernetes-upgrade-20220921215522-10174 kubelet[1677]: E0921 21:57:05.137440    1677 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.592055  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:05 kubernetes-upgrade-20220921215522-10174 kubelet[1690]: E0921 21:57:05.872384    1690 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.592474  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:06 kubernetes-upgrade-20220921215522-10174 kubelet[1704]: E0921 21:57:06.626578    1704 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.592911  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:07 kubernetes-upgrade-20220921215522-10174 kubelet[1716]: E0921 21:57:07.372786    1716 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.593360  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:08 kubernetes-upgrade-20220921215522-10174 kubelet[1731]: E0921 21:57:08.125209    1731 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.593848  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:08 kubernetes-upgrade-20220921215522-10174 kubelet[1743]: E0921 21:57:08.875144    1743 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.594315  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:09 kubernetes-upgrade-20220921215522-10174 kubelet[1758]: E0921 21:57:09.636568    1758 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.594762  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:10 kubernetes-upgrade-20220921215522-10174 kubelet[1770]: E0921 21:57:10.371114    1770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.595201  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:11 kubernetes-upgrade-20220921215522-10174 kubelet[1785]: E0921 21:57:11.135143    1785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.595734  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:11 kubernetes-upgrade-20220921215522-10174 kubelet[1797]: E0921 21:57:11.883503    1797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.596260  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:12 kubernetes-upgrade-20220921215522-10174 kubelet[1813]: E0921 21:57:12.638473    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.596772  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:13 kubernetes-upgrade-20220921215522-10174 kubelet[1825]: E0921 21:57:13.380895    1825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.597473  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:14 kubernetes-upgrade-20220921215522-10174 kubelet[1840]: E0921 21:57:14.132900    1840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.598047  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:14 kubernetes-upgrade-20220921215522-10174 kubelet[1854]: E0921 21:57:14.886412    1854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.598503  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:15 kubernetes-upgrade-20220921215522-10174 kubelet[1869]: E0921 21:57:15.626883    1869 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.599095  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:16 kubernetes-upgrade-20220921215522-10174 kubelet[1883]: E0921 21:57:16.371277    1883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.599844  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:17 kubernetes-upgrade-20220921215522-10174 kubelet[1898]: E0921 21:57:17.133191    1898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.600489  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:17 kubernetes-upgrade-20220921215522-10174 kubelet[1911]: E0921 21:57:17.887173    1911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.601097  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:18 kubernetes-upgrade-20220921215522-10174 kubelet[1926]: E0921 21:57:18.653902    1926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.601709  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:19 kubernetes-upgrade-20220921215522-10174 kubelet[1938]: E0921 21:57:19.383818    1938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.602172  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:20 kubernetes-upgrade-20220921215522-10174 kubelet[1952]: E0921 21:57:20.134629    1952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.602682  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:20 kubernetes-upgrade-20220921215522-10174 kubelet[1964]: E0921 21:57:20.874735    1964 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.603265  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:21 kubernetes-upgrade-20220921215522-10174 kubelet[1980]: E0921 21:57:21.642629    1980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.603813  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:22 kubernetes-upgrade-20220921215522-10174 kubelet[1993]: E0921 21:57:22.383938    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.604222  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2007]: E0921 21:57:23.163080    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.604649  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2019]: E0921 21:57:23.898283    2019 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.605060  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:24 kubernetes-upgrade-20220921215522-10174 kubelet[2034]: E0921 21:57:24.644635    2034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.605477  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:25 kubernetes-upgrade-20220921215522-10174 kubelet[2048]: E0921 21:57:25.386236    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.605899  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2062]: E0921 21:57:26.138653    2062 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.606310  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2074]: E0921 21:57:26.881606    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.606745  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:27 kubernetes-upgrade-20220921215522-10174 kubelet[2089]: E0921 21:57:27.634384    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.607190  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:28 kubernetes-upgrade-20220921215522-10174 kubelet[2101]: E0921 21:57:28.380595    2101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.607610  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2115]: E0921 21:57:29.141993    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.608058  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2127]: E0921 21:57:29.881873    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.608460  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:30 kubernetes-upgrade-20220921215522-10174 kubelet[2141]: E0921 21:57:30.643982    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.608865  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:31 kubernetes-upgrade-20220921215522-10174 kubelet[2153]: E0921 21:57:31.387353    2153 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.609284  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2167]: E0921 21:57:32.141004    2167 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.609705  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2180]: E0921 21:57:32.874796    2180 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.610129  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:33 kubernetes-upgrade-20220921215522-10174 kubelet[2195]: E0921 21:57:33.642006    2195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.610697  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:34 kubernetes-upgrade-20220921215522-10174 kubelet[2208]: E0921 21:57:34.374657    2208 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.611151  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2222]: E0921 21:57:35.147067    2222 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.611780  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2235]: E0921 21:57:35.879490    2235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.612390  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:36 kubernetes-upgrade-20220921215522-10174 kubelet[2250]: E0921 21:57:36.640238    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.613069  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:37 kubernetes-upgrade-20220921215522-10174 kubelet[2263]: E0921 21:57:37.374557    2263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.613511  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2278]: E0921 21:57:38.141807    2278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.613927  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2290]: E0921 21:57:38.876371    2290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.614329  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:39 kubernetes-upgrade-20220921215522-10174 kubelet[2436]: E0921 21:57:39.640823    2436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.614732  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:40 kubernetes-upgrade-20220921215522-10174 kubelet[2448]: E0921 21:57:40.380285    2448 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.615137  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2460]: E0921 21:57:41.157225    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.615525  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2471]: E0921 21:57:41.891401    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.615971  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:42 kubernetes-upgrade-20220921215522-10174 kubelet[2482]: E0921 21:57:42.649592    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.616375  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:43 kubernetes-upgrade-20220921215522-10174 kubelet[2492]: E0921 21:57:43.370917    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.616776  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2503]: E0921 21:57:44.139479    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.617183  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2514]: E0921 21:57:44.889947    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.617594  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:45 kubernetes-upgrade-20220921215522-10174 kubelet[2524]: E0921 21:57:45.634469    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.618021  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:46 kubernetes-upgrade-20220921215522-10174 kubelet[2536]: E0921 21:57:46.371547    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.618424  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2547]: E0921 21:57:47.146381    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.618804  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2558]: E0921 21:57:47.871171    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.619211  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:48 kubernetes-upgrade-20220921215522-10174 kubelet[2569]: E0921 21:57:48.647840    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.619594  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:49 kubernetes-upgrade-20220921215522-10174 kubelet[2579]: E0921 21:57:49.373797    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
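Note the kubelet PIDs in this second pass: 2436 at 21:57:39 up to 2579 at 21:57:49, with journal entries roughly every 750 ms, i.e. systemd is launching a fresh kubelet several times a second. One way to confirm the cadence directly from the journal (the awk pipeline is ours; -o short-unix makes the timestamps subtractable):
	sudo journalctl -u kubelet -n 400 -o short-unix | grep 'command failed' | \
	  awk '{ if (prev) printf "%.2fs\n", $1 - prev; prev = $1 }'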
	I0921 21:57:49.619736  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:57:49.619764  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:57:49.638052  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:57:49.638090  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:57:49.697023  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:57:49.697051  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:57:49.697062  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:57:49.733721  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:57:49.733750  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:57:49.760424  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:57:49.760459  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:57:49.760567  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:57:49.760584  163433 out.go:239]   Sep 21 21:57:46 kubernetes-upgrade-20220921215522-10174 kubelet[2536]: E0921 21:57:46.371547    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.760589  163433 out.go:239]   Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2547]: E0921 21:57:47.146381    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.760594  163433 out.go:239]   Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2558]: E0921 21:57:47.871171    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.760607  163433 out.go:239]   Sep 21 21:57:48 kubernetes-upgrade-20220921215522-10174 kubelet[2569]: E0921 21:57:48.647840    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:57:49.760615  163433 out.go:239]   Sep 21 21:57:49 kubernetes-upgrade-20220921215522-10174 kubelet[2579]: E0921 21:57:49.373797    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:57:49.760626  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:57:49.760633  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:57:59.761918  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:57:59.862750  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:57:59.862841  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:57:59.886652  163433 cri.go:87] found id: ""
	I0921 21:57:59.886674  163433 logs.go:274] 0 containers: []
	W0921 21:57:59.886681  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:57:59.886687  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:57:59.886739  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:57:59.910351  163433 cri.go:87] found id: ""
	I0921 21:57:59.910378  163433 logs.go:274] 0 containers: []
	W0921 21:57:59.910387  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:57:59.910395  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:57:59.910452  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:57:59.934483  163433 cri.go:87] found id: ""
	I0921 21:57:59.934515  163433 logs.go:274] 0 containers: []
	W0921 21:57:59.934525  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:57:59.934532  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:57:59.934608  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:57:59.958298  163433 cri.go:87] found id: ""
	I0921 21:57:59.958329  163433 logs.go:274] 0 containers: []
	W0921 21:57:59.958337  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:57:59.958342  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:57:59.958391  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:57:59.982031  163433 cri.go:87] found id: ""
	I0921 21:57:59.982060  163433 logs.go:274] 0 containers: []
	W0921 21:57:59.982068  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:57:59.982077  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:57:59.982127  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:58:00.006566  163433 cri.go:87] found id: ""
	I0921 21:58:00.006589  163433 logs.go:274] 0 containers: []
	W0921 21:58:00.006595  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:58:00.006601  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:58:00.006644  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:58:00.040965  163433 cri.go:87] found id: ""
	I0921 21:58:00.040991  163433 logs.go:274] 0 containers: []
	W0921 21:58:00.041000  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:58:00.041007  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:58:00.041128  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:58:00.064646  163433 cri.go:87] found id: ""
	I0921 21:58:00.064670  163433 logs.go:274] 0 containers: []
	W0921 21:58:00.064677  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:58:00.064686  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:58:00.064701  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:58:00.078915  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:58:00.078943  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:58:00.132975  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:58:00.132997  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:58:00.133007  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:58:00.170716  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:58:00.170752  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:58:00.197469  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:58:00.197500  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:58:00.216133  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:10 kubernetes-upgrade-20220921215522-10174 kubelet[1770]: E0921 21:57:10.371114    1770 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.216577  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:11 kubernetes-upgrade-20220921215522-10174 kubelet[1785]: E0921 21:57:11.135143    1785 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.217033  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:11 kubernetes-upgrade-20220921215522-10174 kubelet[1797]: E0921 21:57:11.883503    1797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.217523  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:12 kubernetes-upgrade-20220921215522-10174 kubelet[1813]: E0921 21:57:12.638473    1813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.218027  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:13 kubernetes-upgrade-20220921215522-10174 kubelet[1825]: E0921 21:57:13.380895    1825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.218500  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:14 kubernetes-upgrade-20220921215522-10174 kubelet[1840]: E0921 21:57:14.132900    1840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.218934  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:14 kubernetes-upgrade-20220921215522-10174 kubelet[1854]: E0921 21:57:14.886412    1854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.219384  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:15 kubernetes-upgrade-20220921215522-10174 kubelet[1869]: E0921 21:57:15.626883    1869 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.220026  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:16 kubernetes-upgrade-20220921215522-10174 kubelet[1883]: E0921 21:57:16.371277    1883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.220675  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:17 kubernetes-upgrade-20220921215522-10174 kubelet[1898]: E0921 21:57:17.133191    1898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.221323  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:17 kubernetes-upgrade-20220921215522-10174 kubelet[1911]: E0921 21:57:17.887173    1911 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.221898  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:18 kubernetes-upgrade-20220921215522-10174 kubelet[1926]: E0921 21:57:18.653902    1926 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.222337  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:19 kubernetes-upgrade-20220921215522-10174 kubelet[1938]: E0921 21:57:19.383818    1938 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.222733  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:20 kubernetes-upgrade-20220921215522-10174 kubelet[1952]: E0921 21:57:20.134629    1952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.223174  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:20 kubernetes-upgrade-20220921215522-10174 kubelet[1964]: E0921 21:57:20.874735    1964 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.223604  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:21 kubernetes-upgrade-20220921215522-10174 kubelet[1980]: E0921 21:57:21.642629    1980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.224078  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:22 kubernetes-upgrade-20220921215522-10174 kubelet[1993]: E0921 21:57:22.383938    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.224540  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2007]: E0921 21:57:23.163080    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.224973  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2019]: E0921 21:57:23.898283    2019 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.225412  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:24 kubernetes-upgrade-20220921215522-10174 kubelet[2034]: E0921 21:57:24.644635    2034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.225837  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:25 kubernetes-upgrade-20220921215522-10174 kubelet[2048]: E0921 21:57:25.386236    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.226296  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2062]: E0921 21:57:26.138653    2062 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.226739  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2074]: E0921 21:57:26.881606    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.227230  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:27 kubernetes-upgrade-20220921215522-10174 kubelet[2089]: E0921 21:57:27.634384    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.227674  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:28 kubernetes-upgrade-20220921215522-10174 kubelet[2101]: E0921 21:57:28.380595    2101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.228127  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2115]: E0921 21:57:29.141993    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.228552  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2127]: E0921 21:57:29.881873    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.228964  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:30 kubernetes-upgrade-20220921215522-10174 kubelet[2141]: E0921 21:57:30.643982    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.229390  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:31 kubernetes-upgrade-20220921215522-10174 kubelet[2153]: E0921 21:57:31.387353    2153 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.229787  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2167]: E0921 21:57:32.141004    2167 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.230272  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2180]: E0921 21:57:32.874796    2180 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.230750  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:33 kubernetes-upgrade-20220921215522-10174 kubelet[2195]: E0921 21:57:33.642006    2195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.231170  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:34 kubernetes-upgrade-20220921215522-10174 kubelet[2208]: E0921 21:57:34.374657    2208 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.231653  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2222]: E0921 21:57:35.147067    2222 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.232100  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2235]: E0921 21:57:35.879490    2235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.232511  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:36 kubernetes-upgrade-20220921215522-10174 kubelet[2250]: E0921 21:57:36.640238    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.232952  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:37 kubernetes-upgrade-20220921215522-10174 kubelet[2263]: E0921 21:57:37.374557    2263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.233360  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2278]: E0921 21:57:38.141807    2278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.233765  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2290]: E0921 21:57:38.876371    2290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.234168  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:39 kubernetes-upgrade-20220921215522-10174 kubelet[2436]: E0921 21:57:39.640823    2436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.234732  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:40 kubernetes-upgrade-20220921215522-10174 kubelet[2448]: E0921 21:57:40.380285    2448 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.235381  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2460]: E0921 21:57:41.157225    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.236005  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2471]: E0921 21:57:41.891401    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.236638  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:42 kubernetes-upgrade-20220921215522-10174 kubelet[2482]: E0921 21:57:42.649592    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.237029  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:43 kubernetes-upgrade-20220921215522-10174 kubelet[2492]: E0921 21:57:43.370917    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.237430  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2503]: E0921 21:57:44.139479    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.237815  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2514]: E0921 21:57:44.889947    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.238226  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:45 kubernetes-upgrade-20220921215522-10174 kubelet[2524]: E0921 21:57:45.634469    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.238621  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:46 kubernetes-upgrade-20220921215522-10174 kubelet[2536]: E0921 21:57:46.371547    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.239164  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2547]: E0921 21:57:47.146381    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.239665  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2558]: E0921 21:57:47.871171    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.240211  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:48 kubernetes-upgrade-20220921215522-10174 kubelet[2569]: E0921 21:57:48.647840    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.240602  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:49 kubernetes-upgrade-20220921215522-10174 kubelet[2579]: E0921 21:57:49.373797    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.241031  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2729]: E0921 21:57:50.127167    2729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.241448  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2739]: E0921 21:57:50.874508    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.241850  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:51 kubernetes-upgrade-20220921215522-10174 kubelet[2750]: E0921 21:57:51.642311    2750 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.242286  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:52 kubernetes-upgrade-20220921215522-10174 kubelet[2760]: E0921 21:57:52.371286    2760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.242723  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2772]: E0921 21:57:53.128949    2772 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.243184  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2782]: E0921 21:57:53.875177    2782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.243632  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:54 kubernetes-upgrade-20220921215522-10174 kubelet[2792]: E0921 21:57:54.628094    2792 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.244173  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:55 kubernetes-upgrade-20220921215522-10174 kubelet[2803]: E0921 21:57:55.390864    2803 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.244743  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2813]: E0921 21:57:56.138948    2813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.245166  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2823]: E0921 21:57:56.875520    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.245573  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:57 kubernetes-upgrade-20220921215522-10174 kubelet[2834]: E0921 21:57:57.636715    2834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.245977  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:58 kubernetes-upgrade-20220921215522-10174 kubelet[2844]: E0921 21:57:58.375773    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.246388  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2855]: E0921 21:57:59.130098    2855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.246795  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2868]: E0921 21:57:59.873878    2868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:00.246949  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:00.246965  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:58:00.247072  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:58:00.247090  163433 out.go:239]   Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2823]: E0921 21:57:56.875520    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.247100  163433 out.go:239]   Sep 21 21:57:57 kubernetes-upgrade-20220921215522-10174 kubelet[2834]: E0921 21:57:57.636715    2834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.247109  163433 out.go:239]   Sep 21 21:57:58 kubernetes-upgrade-20220921215522-10174 kubelet[2844]: E0921 21:57:58.375773    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.247125  163433 out.go:239]   Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2855]: E0921 21:57:59.130098    2855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:00.247132  163433 out.go:239]   Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2868]: E0921 21:57:59.873878    2868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:00.247138  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:00.247151  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:58:10.248903  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:58:10.363021  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:58:10.363102  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:58:10.394069  163433 cri.go:87] found id: ""
	I0921 21:58:10.394099  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.394108  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:58:10.394115  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:58:10.394166  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:58:10.424049  163433 cri.go:87] found id: ""
	I0921 21:58:10.424080  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.424089  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:58:10.424097  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:58:10.424148  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:58:10.447255  163433 cri.go:87] found id: ""
	I0921 21:58:10.447278  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.447285  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:58:10.447290  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:58:10.447340  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:58:10.470403  163433 cri.go:87] found id: ""
	I0921 21:58:10.470427  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.470433  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:58:10.470439  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:58:10.470490  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:58:10.496364  163433 cri.go:87] found id: ""
	I0921 21:58:10.496393  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.496402  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:58:10.496409  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:58:10.496468  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:58:10.520943  163433 cri.go:87] found id: ""
	I0921 21:58:10.520969  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.520979  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:58:10.520987  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:58:10.521056  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:58:10.543829  163433 cri.go:87] found id: ""
	I0921 21:58:10.543852  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.543858  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:58:10.543863  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:58:10.543921  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:58:10.566214  163433 cri.go:87] found id: ""
	I0921 21:58:10.566239  163433 logs.go:274] 0 containers: []
	W0921 21:58:10.566245  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:58:10.566253  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:58:10.566266  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:58:10.622778  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:58:10.622815  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:58:10.650026  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:58:10.650052  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:58:10.665443  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:20 kubernetes-upgrade-20220921215522-10174 kubelet[1964]: E0921 21:57:20.874735    1964 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.665852  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:21 kubernetes-upgrade-20220921215522-10174 kubelet[1980]: E0921 21:57:21.642629    1980 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.666243  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:22 kubernetes-upgrade-20220921215522-10174 kubelet[1993]: E0921 21:57:22.383938    1993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.666643  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2007]: E0921 21:57:23.163080    2007 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.667035  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:23 kubernetes-upgrade-20220921215522-10174 kubelet[2019]: E0921 21:57:23.898283    2019 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.667426  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:24 kubernetes-upgrade-20220921215522-10174 kubelet[2034]: E0921 21:57:24.644635    2034 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.667884  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:25 kubernetes-upgrade-20220921215522-10174 kubelet[2048]: E0921 21:57:25.386236    2048 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.668282  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2062]: E0921 21:57:26.138653    2062 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.668674  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:26 kubernetes-upgrade-20220921215522-10174 kubelet[2074]: E0921 21:57:26.881606    2074 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.669060  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:27 kubernetes-upgrade-20220921215522-10174 kubelet[2089]: E0921 21:57:27.634384    2089 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.669471  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:28 kubernetes-upgrade-20220921215522-10174 kubelet[2101]: E0921 21:57:28.380595    2101 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.669870  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2115]: E0921 21:57:29.141993    2115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.670268  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:29 kubernetes-upgrade-20220921215522-10174 kubelet[2127]: E0921 21:57:29.881873    2127 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.670671  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:30 kubernetes-upgrade-20220921215522-10174 kubelet[2141]: E0921 21:57:30.643982    2141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.671089  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:31 kubernetes-upgrade-20220921215522-10174 kubelet[2153]: E0921 21:57:31.387353    2153 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.671487  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2167]: E0921 21:57:32.141004    2167 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.671934  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2180]: E0921 21:57:32.874796    2180 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.672327  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:33 kubernetes-upgrade-20220921215522-10174 kubelet[2195]: E0921 21:57:33.642006    2195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.672720  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:34 kubernetes-upgrade-20220921215522-10174 kubelet[2208]: E0921 21:57:34.374657    2208 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.673110  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2222]: E0921 21:57:35.147067    2222 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.673504  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2235]: E0921 21:57:35.879490    2235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.673893  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:36 kubernetes-upgrade-20220921215522-10174 kubelet[2250]: E0921 21:57:36.640238    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.674282  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:37 kubernetes-upgrade-20220921215522-10174 kubelet[2263]: E0921 21:57:37.374557    2263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.674678  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2278]: E0921 21:57:38.141807    2278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.675065  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2290]: E0921 21:57:38.876371    2290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.675464  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:39 kubernetes-upgrade-20220921215522-10174 kubelet[2436]: E0921 21:57:39.640823    2436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.675898  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:40 kubernetes-upgrade-20220921215522-10174 kubelet[2448]: E0921 21:57:40.380285    2448 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.676285  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2460]: E0921 21:57:41.157225    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.676708  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2471]: E0921 21:57:41.891401    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.677234  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:42 kubernetes-upgrade-20220921215522-10174 kubelet[2482]: E0921 21:57:42.649592    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.677705  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:43 kubernetes-upgrade-20220921215522-10174 kubelet[2492]: E0921 21:57:43.370917    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.678149  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2503]: E0921 21:57:44.139479    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.678587  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2514]: E0921 21:57:44.889947    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.679004  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:45 kubernetes-upgrade-20220921215522-10174 kubelet[2524]: E0921 21:57:45.634469    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.679516  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:46 kubernetes-upgrade-20220921215522-10174 kubelet[2536]: E0921 21:57:46.371547    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.680040  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2547]: E0921 21:57:47.146381    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.680497  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2558]: E0921 21:57:47.871171    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.680942  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:48 kubernetes-upgrade-20220921215522-10174 kubelet[2569]: E0921 21:57:48.647840    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.681398  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:49 kubernetes-upgrade-20220921215522-10174 kubelet[2579]: E0921 21:57:49.373797    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.681829  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2729]: E0921 21:57:50.127167    2729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.682256  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2739]: E0921 21:57:50.874508    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.682678  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:51 kubernetes-upgrade-20220921215522-10174 kubelet[2750]: E0921 21:57:51.642311    2750 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.683099  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:52 kubernetes-upgrade-20220921215522-10174 kubelet[2760]: E0921 21:57:52.371286    2760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.683540  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2772]: E0921 21:57:53.128949    2772 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.684084  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2782]: E0921 21:57:53.875177    2782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.684556  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:54 kubernetes-upgrade-20220921215522-10174 kubelet[2792]: E0921 21:57:54.628094    2792 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.684987  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:55 kubernetes-upgrade-20220921215522-10174 kubelet[2803]: E0921 21:57:55.390864    2803 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.685414  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2813]: E0921 21:57:56.138948    2813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.685850  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2823]: E0921 21:57:56.875520    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.686279  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:57 kubernetes-upgrade-20220921215522-10174 kubelet[2834]: E0921 21:57:57.636715    2834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.686696  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:58 kubernetes-upgrade-20220921215522-10174 kubelet[2844]: E0921 21:57:58.375773    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.687144  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2855]: E0921 21:57:59.130098    2855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.687615  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2868]: E0921 21:57:59.873878    2868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.688097  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:00 kubernetes-upgrade-20220921215522-10174 kubelet[3015]: E0921 21:58:00.621495    3015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.688558  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:01 kubernetes-upgrade-20220921215522-10174 kubelet[3026]: E0921 21:58:01.373196    3026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.689049  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3037]: E0921 21:58:02.136352    3037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.689528  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3049]: E0921 21:58:02.882321    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.689993  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:03 kubernetes-upgrade-20220921215522-10174 kubelet[3060]: E0921 21:58:03.625330    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.690434  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:04 kubernetes-upgrade-20220921215522-10174 kubelet[3072]: E0921 21:58:04.375601    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.690857  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3083]: E0921 21:58:05.130018    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.691277  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3094]: E0921 21:58:05.880978    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.691731  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:06 kubernetes-upgrade-20220921215522-10174 kubelet[3105]: E0921 21:58:06.623228    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.692176  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:07 kubernetes-upgrade-20220921215522-10174 kubelet[3118]: E0921 21:58:07.381913    3118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.692601  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3130]: E0921 21:58:08.130002    3130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.693032  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3141]: E0921 21:58:08.875549    3141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.693498  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:09 kubernetes-upgrade-20220921215522-10174 kubelet[3151]: E0921 21:58:09.622166    3151 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.693960  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:10 kubernetes-upgrade-20220921215522-10174 kubelet[3164]: E0921 21:58:10.371341    3164 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:10.694155  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:58:10.694173  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:58:10.712179  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:58:10.712211  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:58:10.771703  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:58:10.771772  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:10.771801  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:58:10.771943  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:58:10.771966  163433 out.go:239]   Sep 21 21:58:07 kubernetes-upgrade-20220921215522-10174 kubelet[3118]: E0921 21:58:07.381913    3118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.771980  163433 out.go:239]   Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3130]: E0921 21:58:08.130002    3130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.771992  163433 out.go:239]   Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3141]: E0921 21:58:08.875549    3141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.772002  163433 out.go:239]   Sep 21 21:58:09 kubernetes-upgrade-20220921215522-10174 kubelet[3151]: E0921 21:58:09.622166    3151 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:10.772014  163433 out.go:239]   Sep 21 21:58:10 kubernetes-upgrade-20220921215522-10174 kubelet[3164]: E0921 21:58:10.371341    3164 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:10.772024  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:10.772031  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:58:20.773183  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:58:20.863377  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:58:20.863453  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:58:20.892562  163433 cri.go:87] found id: ""
	I0921 21:58:20.892588  163433 logs.go:274] 0 containers: []
	W0921 21:58:20.892596  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:58:20.892605  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:58:20.892662  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:58:20.921367  163433 cri.go:87] found id: ""
	I0921 21:58:20.921397  163433 logs.go:274] 0 containers: []
	W0921 21:58:20.921406  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:58:20.921414  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:58:20.921466  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:58:20.953836  163433 cri.go:87] found id: ""
	I0921 21:58:20.953863  163433 logs.go:274] 0 containers: []
	W0921 21:58:20.953869  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:58:20.953875  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:58:20.953921  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:58:21.048920  163433 cri.go:87] found id: ""
	I0921 21:58:21.048955  163433 logs.go:274] 0 containers: []
	W0921 21:58:21.048961  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:58:21.048967  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:58:21.049006  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:58:21.085190  163433 cri.go:87] found id: ""
	I0921 21:58:21.085216  163433 logs.go:274] 0 containers: []
	W0921 21:58:21.085224  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:58:21.085232  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:58:21.085286  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:58:21.125244  163433 cri.go:87] found id: ""
	I0921 21:58:21.125266  163433 logs.go:274] 0 containers: []
	W0921 21:58:21.125274  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:58:21.125282  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:58:21.125328  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:58:21.152179  163433 cri.go:87] found id: ""
	I0921 21:58:21.152205  163433 logs.go:274] 0 containers: []
	W0921 21:58:21.152214  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:58:21.152222  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:58:21.152274  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:58:21.237893  163433 cri.go:87] found id: ""
	I0921 21:58:21.237922  163433 logs.go:274] 0 containers: []
	W0921 21:58:21.237931  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:58:21.237942  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:58:21.237956  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:58:21.255938  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:31 kubernetes-upgrade-20220921215522-10174 kubelet[2153]: E0921 21:57:31.387353    2153 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.256376  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2167]: E0921 21:57:32.141004    2167 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.256772  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:32 kubernetes-upgrade-20220921215522-10174 kubelet[2180]: E0921 21:57:32.874796    2180 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.257173  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:33 kubernetes-upgrade-20220921215522-10174 kubelet[2195]: E0921 21:57:33.642006    2195 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.257568  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:34 kubernetes-upgrade-20220921215522-10174 kubelet[2208]: E0921 21:57:34.374657    2208 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.257966  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2222]: E0921 21:57:35.147067    2222 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.258397  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:35 kubernetes-upgrade-20220921215522-10174 kubelet[2235]: E0921 21:57:35.879490    2235 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.258892  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:36 kubernetes-upgrade-20220921215522-10174 kubelet[2250]: E0921 21:57:36.640238    2250 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.259328  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:37 kubernetes-upgrade-20220921215522-10174 kubelet[2263]: E0921 21:57:37.374557    2263 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.259761  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2278]: E0921 21:57:38.141807    2278 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.260160  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:38 kubernetes-upgrade-20220921215522-10174 kubelet[2290]: E0921 21:57:38.876371    2290 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.260597  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:39 kubernetes-upgrade-20220921215522-10174 kubelet[2436]: E0921 21:57:39.640823    2436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.260999  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:40 kubernetes-upgrade-20220921215522-10174 kubelet[2448]: E0921 21:57:40.380285    2448 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.261412  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2460]: E0921 21:57:41.157225    2460 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.261808  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:41 kubernetes-upgrade-20220921215522-10174 kubelet[2471]: E0921 21:57:41.891401    2471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.262191  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:42 kubernetes-upgrade-20220921215522-10174 kubelet[2482]: E0921 21:57:42.649592    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.262585  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:43 kubernetes-upgrade-20220921215522-10174 kubelet[2492]: E0921 21:57:43.370917    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.262967  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2503]: E0921 21:57:44.139479    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.263341  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2514]: E0921 21:57:44.889947    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.263802  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:45 kubernetes-upgrade-20220921215522-10174 kubelet[2524]: E0921 21:57:45.634469    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.264194  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:46 kubernetes-upgrade-20220921215522-10174 kubelet[2536]: E0921 21:57:46.371547    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.264734  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2547]: E0921 21:57:47.146381    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.265220  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2558]: E0921 21:57:47.871171    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.265686  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:48 kubernetes-upgrade-20220921215522-10174 kubelet[2569]: E0921 21:57:48.647840    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.266116  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:49 kubernetes-upgrade-20220921215522-10174 kubelet[2579]: E0921 21:57:49.373797    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.266520  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2729]: E0921 21:57:50.127167    2729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.266944  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2739]: E0921 21:57:50.874508    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.267349  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:51 kubernetes-upgrade-20220921215522-10174 kubelet[2750]: E0921 21:57:51.642311    2750 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.267761  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:52 kubernetes-upgrade-20220921215522-10174 kubelet[2760]: E0921 21:57:52.371286    2760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.268160  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2772]: E0921 21:57:53.128949    2772 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.268563  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2782]: E0921 21:57:53.875177    2782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.268961  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:54 kubernetes-upgrade-20220921215522-10174 kubelet[2792]: E0921 21:57:54.628094    2792 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.269375  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:55 kubernetes-upgrade-20220921215522-10174 kubelet[2803]: E0921 21:57:55.390864    2803 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.269771  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2813]: E0921 21:57:56.138948    2813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.270186  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2823]: E0921 21:57:56.875520    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.270603  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:57 kubernetes-upgrade-20220921215522-10174 kubelet[2834]: E0921 21:57:57.636715    2834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.271006  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:58 kubernetes-upgrade-20220921215522-10174 kubelet[2844]: E0921 21:57:58.375773    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.271516  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2855]: E0921 21:57:59.130098    2855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.272015  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2868]: E0921 21:57:59.873878    2868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.272451  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:00 kubernetes-upgrade-20220921215522-10174 kubelet[3015]: E0921 21:58:00.621495    3015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.272887  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:01 kubernetes-upgrade-20220921215522-10174 kubelet[3026]: E0921 21:58:01.373196    3026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.273291  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3037]: E0921 21:58:02.136352    3037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.273737  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3049]: E0921 21:58:02.882321    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.274371  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:03 kubernetes-upgrade-20220921215522-10174 kubelet[3060]: E0921 21:58:03.625330    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.274979  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:04 kubernetes-upgrade-20220921215522-10174 kubelet[3072]: E0921 21:58:04.375601    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.275439  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3083]: E0921 21:58:05.130018    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.275905  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3094]: E0921 21:58:05.880978    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.276359  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:06 kubernetes-upgrade-20220921215522-10174 kubelet[3105]: E0921 21:58:06.623228    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.276764  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:07 kubernetes-upgrade-20220921215522-10174 kubelet[3118]: E0921 21:58:07.381913    3118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.277183  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3130]: E0921 21:58:08.130002    3130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.277592  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3141]: E0921 21:58:08.875549    3141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.278030  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:09 kubernetes-upgrade-20220921215522-10174 kubelet[3151]: E0921 21:58:09.622166    3151 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.278446  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:10 kubernetes-upgrade-20220921215522-10174 kubelet[3164]: E0921 21:58:10.371341    3164 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.278873  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3306]: E0921 21:58:11.134783    3306 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.279345  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3318]: E0921 21:58:11.875201    3318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.279747  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:12 kubernetes-upgrade-20220921215522-10174 kubelet[3329]: E0921 21:58:12.632869    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.280134  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:13 kubernetes-upgrade-20220921215522-10174 kubelet[3341]: E0921 21:58:13.374875    3341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.280521  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3353]: E0921 21:58:14.123030    3353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.280910  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3364]: E0921 21:58:14.876732    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.281307  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:15 kubernetes-upgrade-20220921215522-10174 kubelet[3374]: E0921 21:58:15.624588    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.281773  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:16 kubernetes-upgrade-20220921215522-10174 kubelet[3385]: E0921 21:58:16.371393    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.282157  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3396]: E0921 21:58:17.137955    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.282572  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3407]: E0921 21:58:17.890930    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.282960  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:18 kubernetes-upgrade-20220921215522-10174 kubelet[3417]: E0921 21:58:18.632034    3417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.283345  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:19 kubernetes-upgrade-20220921215522-10174 kubelet[3428]: E0921 21:58:19.375785    3428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.283764  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3439]: E0921 21:58:20.129184    3439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.284178  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3451]: E0921 21:58:20.880762    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:21.284338  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:58:21.284357  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:58:21.299587  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:58:21.299614  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:58:21.360540  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:58:21.360565  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:58:21.360579  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:58:21.397491  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:58:21.397523  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:58:21.427754  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:21.427779  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:58:21.427901  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:58:21.427930  163433 out.go:239]   Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3407]: E0921 21:58:17.890930    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.427938  163433 out.go:239]   Sep 21 21:58:18 kubernetes-upgrade-20220921215522-10174 kubelet[3417]: E0921 21:58:18.632034    3417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.427945  163433 out.go:239]   Sep 21 21:58:19 kubernetes-upgrade-20220921215522-10174 kubelet[3428]: E0921 21:58:19.375785    3428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.427953  163433 out.go:239]   Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3439]: E0921 21:58:20.129184    3439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:21.427963  163433 out.go:239]   Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3451]: E0921 21:58:20.880762    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:21.427969  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:21.427980  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:58:31.430050  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:58:31.863473  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:58:31.863538  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:58:31.888162  163433 cri.go:87] found id: ""
	I0921 21:58:31.888184  163433 logs.go:274] 0 containers: []
	W0921 21:58:31.888190  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:58:31.888195  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:58:31.888242  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:58:31.912569  163433 cri.go:87] found id: ""
	I0921 21:58:31.912597  163433 logs.go:274] 0 containers: []
	W0921 21:58:31.912606  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:58:31.912622  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:58:31.912678  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:58:31.937077  163433 cri.go:87] found id: ""
	I0921 21:58:31.937101  163433 logs.go:274] 0 containers: []
	W0921 21:58:31.937125  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:58:31.937133  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:58:31.937187  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:58:31.961115  163433 cri.go:87] found id: ""
	I0921 21:58:31.961141  163433 logs.go:274] 0 containers: []
	W0921 21:58:31.961150  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:58:31.961157  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:58:31.961220  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:58:31.984291  163433 cri.go:87] found id: ""
	I0921 21:58:31.984313  163433 logs.go:274] 0 containers: []
	W0921 21:58:31.984319  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:58:31.984325  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:58:31.984374  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:58:32.007119  163433 cri.go:87] found id: ""
	I0921 21:58:32.007149  163433 logs.go:274] 0 containers: []
	W0921 21:58:32.007158  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:58:32.007166  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:58:32.007227  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:58:32.030492  163433 cri.go:87] found id: ""
	I0921 21:58:32.030525  163433 logs.go:274] 0 containers: []
	W0921 21:58:32.030534  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:58:32.030543  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:58:32.030589  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:58:32.053566  163433 cri.go:87] found id: ""
	I0921 21:58:32.053589  163433 logs.go:274] 0 containers: []
	W0921 21:58:32.053597  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:58:32.053610  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:58:32.053623  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:58:32.098622  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:58:32.098659  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:58:32.127496  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:58:32.127535  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:58:32.145188  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:42 kubernetes-upgrade-20220921215522-10174 kubelet[2482]: E0921 21:57:42.649592    2482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.145597  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:43 kubernetes-upgrade-20220921215522-10174 kubelet[2492]: E0921 21:57:43.370917    2492 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.146009  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2503]: E0921 21:57:44.139479    2503 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.146402  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:44 kubernetes-upgrade-20220921215522-10174 kubelet[2514]: E0921 21:57:44.889947    2514 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.146792  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:45 kubernetes-upgrade-20220921215522-10174 kubelet[2524]: E0921 21:57:45.634469    2524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.147190  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:46 kubernetes-upgrade-20220921215522-10174 kubelet[2536]: E0921 21:57:46.371547    2536 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.147585  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2547]: E0921 21:57:47.146381    2547 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.148009  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:47 kubernetes-upgrade-20220921215522-10174 kubelet[2558]: E0921 21:57:47.871171    2558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.148408  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:48 kubernetes-upgrade-20220921215522-10174 kubelet[2569]: E0921 21:57:48.647840    2569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.148800  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:49 kubernetes-upgrade-20220921215522-10174 kubelet[2579]: E0921 21:57:49.373797    2579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.149205  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2729]: E0921 21:57:50.127167    2729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.149616  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:50 kubernetes-upgrade-20220921215522-10174 kubelet[2739]: E0921 21:57:50.874508    2739 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.150029  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:51 kubernetes-upgrade-20220921215522-10174 kubelet[2750]: E0921 21:57:51.642311    2750 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.150421  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:52 kubernetes-upgrade-20220921215522-10174 kubelet[2760]: E0921 21:57:52.371286    2760 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.150832  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2772]: E0921 21:57:53.128949    2772 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.151226  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2782]: E0921 21:57:53.875177    2782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.151618  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:54 kubernetes-upgrade-20220921215522-10174 kubelet[2792]: E0921 21:57:54.628094    2792 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.152030  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:55 kubernetes-upgrade-20220921215522-10174 kubelet[2803]: E0921 21:57:55.390864    2803 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.152433  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2813]: E0921 21:57:56.138948    2813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.152829  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2823]: E0921 21:57:56.875520    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.153228  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:57 kubernetes-upgrade-20220921215522-10174 kubelet[2834]: E0921 21:57:57.636715    2834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.153644  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:58 kubernetes-upgrade-20220921215522-10174 kubelet[2844]: E0921 21:57:58.375773    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.154041  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2855]: E0921 21:57:59.130098    2855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.154440  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2868]: E0921 21:57:59.873878    2868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.154840  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:00 kubernetes-upgrade-20220921215522-10174 kubelet[3015]: E0921 21:58:00.621495    3015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.155238  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:01 kubernetes-upgrade-20220921215522-10174 kubelet[3026]: E0921 21:58:01.373196    3026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.155635  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3037]: E0921 21:58:02.136352    3037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.156066  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3049]: E0921 21:58:02.882321    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.156460  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:03 kubernetes-upgrade-20220921215522-10174 kubelet[3060]: E0921 21:58:03.625330    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.156845  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:04 kubernetes-upgrade-20220921215522-10174 kubelet[3072]: E0921 21:58:04.375601    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.157254  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3083]: E0921 21:58:05.130018    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.157653  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3094]: E0921 21:58:05.880978    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.158046  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:06 kubernetes-upgrade-20220921215522-10174 kubelet[3105]: E0921 21:58:06.623228    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.158484  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:07 kubernetes-upgrade-20220921215522-10174 kubelet[3118]: E0921 21:58:07.381913    3118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.158872  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3130]: E0921 21:58:08.130002    3130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.159270  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3141]: E0921 21:58:08.875549    3141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.159658  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:09 kubernetes-upgrade-20220921215522-10174 kubelet[3151]: E0921 21:58:09.622166    3151 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.160073  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:10 kubernetes-upgrade-20220921215522-10174 kubelet[3164]: E0921 21:58:10.371341    3164 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.160476  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3306]: E0921 21:58:11.134783    3306 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.160870  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3318]: E0921 21:58:11.875201    3318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.161267  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:12 kubernetes-upgrade-20220921215522-10174 kubelet[3329]: E0921 21:58:12.632869    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.161660  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:13 kubernetes-upgrade-20220921215522-10174 kubelet[3341]: E0921 21:58:13.374875    3341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.162080  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3353]: E0921 21:58:14.123030    3353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.162478  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3364]: E0921 21:58:14.876732    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.162872  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:15 kubernetes-upgrade-20220921215522-10174 kubelet[3374]: E0921 21:58:15.624588    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.163270  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:16 kubernetes-upgrade-20220921215522-10174 kubelet[3385]: E0921 21:58:16.371393    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.163666  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3396]: E0921 21:58:17.137955    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.164087  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3407]: E0921 21:58:17.890930    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.164481  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:18 kubernetes-upgrade-20220921215522-10174 kubelet[3417]: E0921 21:58:18.632034    3417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.164876  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:19 kubernetes-upgrade-20220921215522-10174 kubelet[3428]: E0921 21:58:19.375785    3428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.165276  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3439]: E0921 21:58:20.129184    3439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.165667  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3451]: E0921 21:58:20.880762    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.166060  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:21 kubernetes-upgrade-20220921215522-10174 kubelet[3597]: E0921 21:58:21.630652    3597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.166455  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:22 kubernetes-upgrade-20220921215522-10174 kubelet[3607]: E0921 21:58:22.370329    3607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.166863  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3618]: E0921 21:58:23.123363    3618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.167267  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3629]: E0921 21:58:23.871937    3629 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.167666  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:24 kubernetes-upgrade-20220921215522-10174 kubelet[3640]: E0921 21:58:24.623599    3640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.168082  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:25 kubernetes-upgrade-20220921215522-10174 kubelet[3651]: E0921 21:58:25.373329    3651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.168479  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3664]: E0921 21:58:26.122744    3664 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.168864  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3675]: E0921 21:58:26.871823    3675 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.169268  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:27 kubernetes-upgrade-20220921215522-10174 kubelet[3686]: E0921 21:58:27.622585    3686 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.169672  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:28 kubernetes-upgrade-20220921215522-10174 kubelet[3697]: E0921 21:58:28.376468    3697 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.170076  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3708]: E0921 21:58:29.132047    3708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.170479  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3719]: E0921 21:58:29.877817    3719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.170907  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:30 kubernetes-upgrade-20220921215522-10174 kubelet[3730]: E0921 21:58:30.636585    3730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.171314  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:31 kubernetes-upgrade-20220921215522-10174 kubelet[3740]: E0921 21:58:31.375404    3740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.171781  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3854]: E0921 21:58:32.129853    3854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:32.171915  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:58:32.171930  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:58:32.186117  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:58:32.186146  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:58:32.240672  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:58:32.240699  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:32.240713  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:58:32.240825  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:58:32.240839  163433 out.go:239]   Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3708]: E0921 21:58:29.132047    3708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.240846  163433 out.go:239]   Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3719]: E0921 21:58:29.877817    3719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.240854  163433 out.go:239]   Sep 21 21:58:30 kubernetes-upgrade-20220921215522-10174 kubelet[3730]: E0921 21:58:30.636585    3730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.240862  163433 out.go:239]   Sep 21 21:58:31 kubernetes-upgrade-20220921215522-10174 kubelet[3740]: E0921 21:58:31.375404    3740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:32.240874  163433 out.go:239]   Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3854]: E0921 21:58:32.129853    3854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:32.240883  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:32.240892  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
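The "connection refused" from kubectl describe nodes follows directly from the container listings above: crictl finds no kube-apiserver container, so nothing is listening on port 8443 and every kubectl call against localhost:8443 is rejected. A quick probe from the host, assuming curl is available in the node image (an assumption, not shown in this log):

	docker exec kubernetes-upgrade-20220921215522-10174 curl -sk https://localhost:8443/healthz
	# with no apiserver process running, this fails with "Connection refused"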
	I0921 21:58:42.241531  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:58:42.363477  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:58:42.363559  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:58:42.387547  163433 cri.go:87] found id: ""
	I0921 21:58:42.387571  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.387578  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:58:42.387585  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:58:42.387639  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:58:42.409893  163433 cri.go:87] found id: ""
	I0921 21:58:42.409924  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.409933  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:58:42.409940  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:58:42.409985  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:58:42.431936  163433 cri.go:87] found id: ""
	I0921 21:58:42.431959  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.431974  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:58:42.431981  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:58:42.432034  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:58:42.454563  163433 cri.go:87] found id: ""
	I0921 21:58:42.454587  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.454593  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:58:42.454598  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:58:42.454638  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:58:42.477095  163433 cri.go:87] found id: ""
	I0921 21:58:42.477120  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.477127  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:58:42.477146  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:58:42.477202  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:58:42.501982  163433 cri.go:87] found id: ""
	I0921 21:58:42.502006  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.502012  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:58:42.502018  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:58:42.502064  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:58:42.526071  163433 cri.go:87] found id: ""
	I0921 21:58:42.526102  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.526110  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:58:42.526119  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:58:42.526173  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:58:42.550511  163433 cri.go:87] found id: ""
	I0921 21:58:42.550541  163433 logs.go:274] 0 containers: []
	W0921 21:58:42.550550  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:58:42.550562  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:58:42.550577  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:58:42.566357  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:58:42.566389  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:58:42.630777  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:58:42.630804  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:58:42.630816  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:58:42.665525  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:58:42.665563  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:58:42.696286  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:58:42.696323  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:58:42.714032  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2772]: E0921 21:57:53.128949    2772 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.714436  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:53 kubernetes-upgrade-20220921215522-10174 kubelet[2782]: E0921 21:57:53.875177    2782 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.714827  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:54 kubernetes-upgrade-20220921215522-10174 kubelet[2792]: E0921 21:57:54.628094    2792 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.715228  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:55 kubernetes-upgrade-20220921215522-10174 kubelet[2803]: E0921 21:57:55.390864    2803 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.715609  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2813]: E0921 21:57:56.138948    2813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.716027  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:56 kubernetes-upgrade-20220921215522-10174 kubelet[2823]: E0921 21:57:56.875520    2823 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.716418  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:57 kubernetes-upgrade-20220921215522-10174 kubelet[2834]: E0921 21:57:57.636715    2834 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.716798  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:58 kubernetes-upgrade-20220921215522-10174 kubelet[2844]: E0921 21:57:58.375773    2844 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.717186  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2855]: E0921 21:57:59.130098    2855 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.717588  163433 logs.go:138] Found kubelet problem: Sep 21 21:57:59 kubernetes-upgrade-20220921215522-10174 kubelet[2868]: E0921 21:57:59.873878    2868 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.717969  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:00 kubernetes-upgrade-20220921215522-10174 kubelet[3015]: E0921 21:58:00.621495    3015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.718352  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:01 kubernetes-upgrade-20220921215522-10174 kubelet[3026]: E0921 21:58:01.373196    3026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.718787  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3037]: E0921 21:58:02.136352    3037 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.719182  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3049]: E0921 21:58:02.882321    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.719557  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:03 kubernetes-upgrade-20220921215522-10174 kubelet[3060]: E0921 21:58:03.625330    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.719955  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:04 kubernetes-upgrade-20220921215522-10174 kubelet[3072]: E0921 21:58:04.375601    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.720348  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3083]: E0921 21:58:05.130018    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.720748  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3094]: E0921 21:58:05.880978    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.721130  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:06 kubernetes-upgrade-20220921215522-10174 kubelet[3105]: E0921 21:58:06.623228    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.721516  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:07 kubernetes-upgrade-20220921215522-10174 kubelet[3118]: E0921 21:58:07.381913    3118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.721937  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3130]: E0921 21:58:08.130002    3130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.722329  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3141]: E0921 21:58:08.875549    3141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.722705  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:09 kubernetes-upgrade-20220921215522-10174 kubelet[3151]: E0921 21:58:09.622166    3151 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.723095  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:10 kubernetes-upgrade-20220921215522-10174 kubelet[3164]: E0921 21:58:10.371341    3164 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.723477  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3306]: E0921 21:58:11.134783    3306 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.723889  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3318]: E0921 21:58:11.875201    3318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.724274  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:12 kubernetes-upgrade-20220921215522-10174 kubelet[3329]: E0921 21:58:12.632869    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.724660  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:13 kubernetes-upgrade-20220921215522-10174 kubelet[3341]: E0921 21:58:13.374875    3341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.725045  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3353]: E0921 21:58:14.123030    3353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.725433  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3364]: E0921 21:58:14.876732    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.725818  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:15 kubernetes-upgrade-20220921215522-10174 kubelet[3374]: E0921 21:58:15.624588    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.726211  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:16 kubernetes-upgrade-20220921215522-10174 kubelet[3385]: E0921 21:58:16.371393    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.726594  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3396]: E0921 21:58:17.137955    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.726972  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3407]: E0921 21:58:17.890930    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.727353  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:18 kubernetes-upgrade-20220921215522-10174 kubelet[3417]: E0921 21:58:18.632034    3417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.727760  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:19 kubernetes-upgrade-20220921215522-10174 kubelet[3428]: E0921 21:58:19.375785    3428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.728147  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3439]: E0921 21:58:20.129184    3439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.728533  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3451]: E0921 21:58:20.880762    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.728909  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:21 kubernetes-upgrade-20220921215522-10174 kubelet[3597]: E0921 21:58:21.630652    3597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.729305  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:22 kubernetes-upgrade-20220921215522-10174 kubelet[3607]: E0921 21:58:22.370329    3607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.729683  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3618]: E0921 21:58:23.123363    3618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.730060  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3629]: E0921 21:58:23.871937    3629 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.730449  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:24 kubernetes-upgrade-20220921215522-10174 kubelet[3640]: E0921 21:58:24.623599    3640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.730842  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:25 kubernetes-upgrade-20220921215522-10174 kubelet[3651]: E0921 21:58:25.373329    3651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.731234  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3664]: E0921 21:58:26.122744    3664 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.731610  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3675]: E0921 21:58:26.871823    3675 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.732034  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:27 kubernetes-upgrade-20220921215522-10174 kubelet[3686]: E0921 21:58:27.622585    3686 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.732425  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:28 kubernetes-upgrade-20220921215522-10174 kubelet[3697]: E0921 21:58:28.376468    3697 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.732803  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3708]: E0921 21:58:29.132047    3708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.733184  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3719]: E0921 21:58:29.877817    3719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.733562  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:30 kubernetes-upgrade-20220921215522-10174 kubelet[3730]: E0921 21:58:30.636585    3730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.733937  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:31 kubernetes-upgrade-20220921215522-10174 kubelet[3740]: E0921 21:58:31.375404    3740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.734326  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3854]: E0921 21:58:32.129853    3854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.734711  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3898]: E0921 21:58:32.873032    3898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.735106  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:33 kubernetes-upgrade-20220921215522-10174 kubelet[3910]: E0921 21:58:33.628639    3910 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.735483  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:34 kubernetes-upgrade-20220921215522-10174 kubelet[3921]: E0921 21:58:34.371911    3921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.735920  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3932]: E0921 21:58:35.133679    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.736318  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3942]: E0921 21:58:35.894996    3942 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.736696  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:36 kubernetes-upgrade-20220921215522-10174 kubelet[3952]: E0921 21:58:36.645679    3952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.737112  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:37 kubernetes-upgrade-20220921215522-10174 kubelet[3962]: E0921 21:58:37.382599    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.737513  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3972]: E0921 21:58:38.148834    3972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.737892  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3982]: E0921 21:58:38.878739    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.738275  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:39 kubernetes-upgrade-20220921215522-10174 kubelet[3993]: E0921 21:58:39.634330    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.738655  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:40 kubernetes-upgrade-20220921215522-10174 kubelet[4003]: E0921 21:58:40.371691    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.739036  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4015]: E0921 21:58:41.121950    4015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.739421  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4026]: E0921 21:58:41.871405    4026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.739825  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:42 kubernetes-upgrade-20220921215522-10174 kubelet[4150]: E0921 21:58:42.630550    4150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:42.739964  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:42.739978  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:58:42.740075  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:58:42.740093  163433 out.go:239]   Sep 21 21:58:39 kubernetes-upgrade-20220921215522-10174 kubelet[3993]: E0921 21:58:39.634330    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.740099  163433 out.go:239]   Sep 21 21:58:40 kubernetes-upgrade-20220921215522-10174 kubelet[4003]: E0921 21:58:40.371691    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.740104  163433 out.go:239]   Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4015]: E0921 21:58:41.121950    4015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.740109  163433 out.go:239]   Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4026]: E0921 21:58:41.871405    4026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:42.740112  163433 out.go:239]   Sep 21 21:58:42 kubernetes-upgrade-20220921215522-10174 kubelet[4150]: E0921 21:58:42.630550    4150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:42.740117  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:42.740127  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
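The timestamps show the retry cadence: the pgrep probe for kube-apiserver fires at 21:58:31, 21:58:42, and 21:58:52, i.e. roughly every 10 seconds, and each miss triggers another round of log gathering like the ones above. An illustrative reproduction of that wait loop, using the minikube ssh subcommand (not part of this log output):

	while ! out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 10; done
	# never succeeds here: kubelet crashes before any static pod can start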
	I0921 21:58:52.740913  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:58:52.863793  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:58:52.863871  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:58:52.889997  163433 cri.go:87] found id: ""
	I0921 21:58:52.890021  163433 logs.go:274] 0 containers: []
	W0921 21:58:52.890028  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:58:52.890033  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:58:52.890087  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:58:52.914624  163433 cri.go:87] found id: ""
	I0921 21:58:52.914652  163433 logs.go:274] 0 containers: []
	W0921 21:58:52.914658  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:58:52.914664  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:58:52.914718  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:58:52.937740  163433 cri.go:87] found id: ""
	I0921 21:58:52.937766  163433 logs.go:274] 0 containers: []
	W0921 21:58:52.937772  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:58:52.937778  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:58:52.937852  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:58:52.962432  163433 cri.go:87] found id: ""
	I0921 21:58:52.962460  163433 logs.go:274] 0 containers: []
	W0921 21:58:52.962468  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:58:52.962475  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:58:52.962528  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:58:52.986468  163433 cri.go:87] found id: ""
	I0921 21:58:52.986496  163433 logs.go:274] 0 containers: []
	W0921 21:58:52.986502  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:58:52.986508  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:58:52.986560  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:58:53.009885  163433 cri.go:87] found id: ""
	I0921 21:58:53.009914  163433 logs.go:274] 0 containers: []
	W0921 21:58:53.009921  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:58:53.009927  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:58:53.009976  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:58:53.034075  163433 cri.go:87] found id: ""
	I0921 21:58:53.034105  163433 logs.go:274] 0 containers: []
	W0921 21:58:53.034115  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:58:53.034123  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:58:53.034173  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:58:53.056921  163433 cri.go:87] found id: ""
	I0921 21:58:53.056945  163433 logs.go:274] 0 containers: []
	W0921 21:58:53.056954  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:58:53.056966  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:58:53.056980  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:58:53.073625  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:02 kubernetes-upgrade-20220921215522-10174 kubelet[3049]: E0921 21:58:02.882321    3049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.074250  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:03 kubernetes-upgrade-20220921215522-10174 kubelet[3060]: E0921 21:58:03.625330    3060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.074937  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:04 kubernetes-upgrade-20220921215522-10174 kubelet[3072]: E0921 21:58:04.375601    3072 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.075567  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3083]: E0921 21:58:05.130018    3083 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.076166  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:05 kubernetes-upgrade-20220921215522-10174 kubelet[3094]: E0921 21:58:05.880978    3094 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.076609  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:06 kubernetes-upgrade-20220921215522-10174 kubelet[3105]: E0921 21:58:06.623228    3105 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.076993  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:07 kubernetes-upgrade-20220921215522-10174 kubelet[3118]: E0921 21:58:07.381913    3118 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.077377  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3130]: E0921 21:58:08.130002    3130 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.077751  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:08 kubernetes-upgrade-20220921215522-10174 kubelet[3141]: E0921 21:58:08.875549    3141 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.078127  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:09 kubernetes-upgrade-20220921215522-10174 kubelet[3151]: E0921 21:58:09.622166    3151 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.078510  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:10 kubernetes-upgrade-20220921215522-10174 kubelet[3164]: E0921 21:58:10.371341    3164 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.078894  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3306]: E0921 21:58:11.134783    3306 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.079283  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:11 kubernetes-upgrade-20220921215522-10174 kubelet[3318]: E0921 21:58:11.875201    3318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.079936  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:12 kubernetes-upgrade-20220921215522-10174 kubelet[3329]: E0921 21:58:12.632869    3329 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.080383  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:13 kubernetes-upgrade-20220921215522-10174 kubelet[3341]: E0921 21:58:13.374875    3341 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.080786  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3353]: E0921 21:58:14.123030    3353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.081167  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3364]: E0921 21:58:14.876732    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.081688  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:15 kubernetes-upgrade-20220921215522-10174 kubelet[3374]: E0921 21:58:15.624588    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.082326  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:16 kubernetes-upgrade-20220921215522-10174 kubelet[3385]: E0921 21:58:16.371393    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.082962  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3396]: E0921 21:58:17.137955    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.083584  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3407]: E0921 21:58:17.890930    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.084102  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:18 kubernetes-upgrade-20220921215522-10174 kubelet[3417]: E0921 21:58:18.632034    3417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.084506  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:19 kubernetes-upgrade-20220921215522-10174 kubelet[3428]: E0921 21:58:19.375785    3428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.085070  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3439]: E0921 21:58:20.129184    3439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.085475  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3451]: E0921 21:58:20.880762    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.085860  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:21 kubernetes-upgrade-20220921215522-10174 kubelet[3597]: E0921 21:58:21.630652    3597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.086235  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:22 kubernetes-upgrade-20220921215522-10174 kubelet[3607]: E0921 21:58:22.370329    3607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.086633  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3618]: E0921 21:58:23.123363    3618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.087007  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3629]: E0921 21:58:23.871937    3629 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.087393  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:24 kubernetes-upgrade-20220921215522-10174 kubelet[3640]: E0921 21:58:24.623599    3640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.087792  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:25 kubernetes-upgrade-20220921215522-10174 kubelet[3651]: E0921 21:58:25.373329    3651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.088170  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3664]: E0921 21:58:26.122744    3664 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.088550  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3675]: E0921 21:58:26.871823    3675 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.088931  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:27 kubernetes-upgrade-20220921215522-10174 kubelet[3686]: E0921 21:58:27.622585    3686 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.089338  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:28 kubernetes-upgrade-20220921215522-10174 kubelet[3697]: E0921 21:58:28.376468    3697 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.089724  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3708]: E0921 21:58:29.132047    3708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.090099  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3719]: E0921 21:58:29.877817    3719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.090506  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:30 kubernetes-upgrade-20220921215522-10174 kubelet[3730]: E0921 21:58:30.636585    3730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.090889  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:31 kubernetes-upgrade-20220921215522-10174 kubelet[3740]: E0921 21:58:31.375404    3740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.091266  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3854]: E0921 21:58:32.129853    3854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.091825  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3898]: E0921 21:58:32.873032    3898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.092278  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:33 kubernetes-upgrade-20220921215522-10174 kubelet[3910]: E0921 21:58:33.628639    3910 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.092705  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:34 kubernetes-upgrade-20220921215522-10174 kubelet[3921]: E0921 21:58:34.371911    3921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.093102  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3932]: E0921 21:58:35.133679    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.093497  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3942]: E0921 21:58:35.894996    3942 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.093937  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:36 kubernetes-upgrade-20220921215522-10174 kubelet[3952]: E0921 21:58:36.645679    3952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.094354  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:37 kubernetes-upgrade-20220921215522-10174 kubelet[3962]: E0921 21:58:37.382599    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.094757  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3972]: E0921 21:58:38.148834    3972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.095151  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3982]: E0921 21:58:38.878739    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.095711  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:39 kubernetes-upgrade-20220921215522-10174 kubelet[3993]: E0921 21:58:39.634330    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.096365  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:40 kubernetes-upgrade-20220921215522-10174 kubelet[4003]: E0921 21:58:40.371691    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.096812  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4015]: E0921 21:58:41.121950    4015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.097192  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4026]: E0921 21:58:41.871405    4026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.097683  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:42 kubernetes-upgrade-20220921215522-10174 kubelet[4150]: E0921 21:58:42.630550    4150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.098241  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:43 kubernetes-upgrade-20220921215522-10174 kubelet[4185]: E0921 21:58:43.372417    4185 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.098718  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4197]: E0921 21:58:44.122124    4197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.099191  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4209]: E0921 21:58:44.872708    4209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.099680  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:45 kubernetes-upgrade-20220921215522-10174 kubelet[4219]: E0921 21:58:45.620684    4219 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.100088  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:46 kubernetes-upgrade-20220921215522-10174 kubelet[4230]: E0921 21:58:46.371087    4230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.100522  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4243]: E0921 21:58:47.121413    4243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.100925  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4254]: E0921 21:58:47.872018    4254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.101326  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:48 kubernetes-upgrade-20220921215522-10174 kubelet[4264]: E0921 21:58:48.621704    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.101721  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:49 kubernetes-upgrade-20220921215522-10174 kubelet[4275]: E0921 21:58:49.371878    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.102110  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4286]: E0921 21:58:50.122084    4286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.102507  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4297]: E0921 21:58:50.876452    4297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.103084  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:51 kubernetes-upgrade-20220921215522-10174 kubelet[4307]: E0921 21:58:51.627411    4307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.103563  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:52 kubernetes-upgrade-20220921215522-10174 kubelet[4318]: E0921 21:58:52.373933    4318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:53.103713  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:58:53.103744  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:58:53.119036  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:58:53.119064  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:58:53.174025  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
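	Editor's note: the "connection refused" from kubectl describe nodes above is a downstream symptom rather than the root failure; nothing is listening on localhost:8443 because no apiserver container exists (every crictl listing above returns an empty id). Assuming that is the case, a minimal liveness check from the host would be (hypothetical command, not part of the captured run):
	
	    minikube ssh -p kubernetes-upgrade-20220921215522-10174 -- curl -sk https://localhost:8443/healthz
	
	While the apiserver is down, curl exits with status 7 (connection refused), matching the kubectl error above.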
	I0921 21:58:53.174054  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:58:53.174069  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:58:53.212843  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:58:53.212877  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:58:53.244885  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:53.244915  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:58:53.245034  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:58:53.245050  163433 out.go:239]   Sep 21 21:58:49 kubernetes-upgrade-20220921215522-10174 kubelet[4275]: E0921 21:58:49.371878    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.245060  163433 out.go:239]   Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4286]: E0921 21:58:50.122084    4286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.245072  163433 out.go:239]   Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4297]: E0921 21:58:50.876452    4297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.245079  163433 out.go:239]   Sep 21 21:58:51 kubernetes-upgrade-20220921215522-10174 kubelet[4307]: E0921 21:58:51.627411    4307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:58:53.245094  163433 out.go:239]   Sep 21 21:58:52 kubernetes-upgrade-20220921215522-10174 kubelet[4318]: E0921 21:58:52.373933    4318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:58:53.245105  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:58:53.245133  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
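	Editor's note: every kubelet restart captured above dies on the same parse error. --cni-conf-dir (along with --cni-bin-dir and --network-plugin) was removed from kubelet in Kubernetes v1.24 as part of the dockershim removal, so the v1.25.2 kubelet exits immediately while the old flag is still present in its systemd unit. A minimal sketch for confirming that on the node, assuming the standard kubeadm drop-in location (hypothetical commands, not part of the captured run):
	
	    # count the repeated parse failures in the kubelet journal (grep filters locally)
	    minikube ssh -p kubernetes-upgrade-20220921215522-10174 -- sudo journalctl -u kubelet -n 400 --no-pager | grep -c 'unknown flag: --cni-conf-dir'
	    # look for the stale flag in the kubelet drop-in config
	    minikube ssh -p kubernetes-upgrade-20220921215522-10174 -- grep -r 'cni-conf-dir' /etc/systemd/system/kubelet.service.d/
	
	If the stale flag is indeed the cause, dropping it from the drop-in and reloading systemd would let kubelet start.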
	I0921 21:59:03.245553  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:59:03.363405  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:59:03.363497  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:59:03.387969  163433 cri.go:87] found id: ""
	I0921 21:59:03.388005  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.388017  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:59:03.388025  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:59:03.388086  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:59:03.411104  163433 cri.go:87] found id: ""
	I0921 21:59:03.411137  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.411144  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:59:03.411149  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:59:03.411194  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:59:03.435637  163433 cri.go:87] found id: ""
	I0921 21:59:03.435667  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.435676  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:59:03.435684  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:59:03.435816  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:59:03.460863  163433 cri.go:87] found id: ""
	I0921 21:59:03.460896  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.460903  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:59:03.460909  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:59:03.460961  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:59:03.486244  163433 cri.go:87] found id: ""
	I0921 21:59:03.486273  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.486282  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:59:03.486290  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:59:03.486345  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:59:03.512376  163433 cri.go:87] found id: ""
	I0921 21:59:03.512406  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.512415  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:59:03.512423  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:59:03.512477  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:59:03.540553  163433 cri.go:87] found id: ""
	I0921 21:59:03.540585  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.540593  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:59:03.540601  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:59:03.540658  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:59:03.570035  163433 cri.go:87] found id: ""
	I0921 21:59:03.570063  163433 logs.go:274] 0 containers: []
	W0921 21:59:03.570074  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:59:03.570087  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:59:03.570102  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:59:03.602311  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:59:03.602345  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:59:03.623938  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3353]: E0921 21:58:14.123030    3353 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.624591  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:14 kubernetes-upgrade-20220921215522-10174 kubelet[3364]: E0921 21:58:14.876732    3364 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.625292  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:15 kubernetes-upgrade-20220921215522-10174 kubelet[3374]: E0921 21:58:15.624588    3374 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.625985  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:16 kubernetes-upgrade-20220921215522-10174 kubelet[3385]: E0921 21:58:16.371393    3385 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.626674  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3396]: E0921 21:58:17.137955    3396 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.627359  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:17 kubernetes-upgrade-20220921215522-10174 kubelet[3407]: E0921 21:58:17.890930    3407 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.628139  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:18 kubernetes-upgrade-20220921215522-10174 kubelet[3417]: E0921 21:58:18.632034    3417 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.628856  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:19 kubernetes-upgrade-20220921215522-10174 kubelet[3428]: E0921 21:58:19.375785    3428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.629550  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3439]: E0921 21:58:20.129184    3439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.630208  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:20 kubernetes-upgrade-20220921215522-10174 kubelet[3451]: E0921 21:58:20.880762    3451 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.630758  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:21 kubernetes-upgrade-20220921215522-10174 kubelet[3597]: E0921 21:58:21.630652    3597 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.631213  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:22 kubernetes-upgrade-20220921215522-10174 kubelet[3607]: E0921 21:58:22.370329    3607 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.631688  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3618]: E0921 21:58:23.123363    3618 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.632162  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:23 kubernetes-upgrade-20220921215522-10174 kubelet[3629]: E0921 21:58:23.871937    3629 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.632586  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:24 kubernetes-upgrade-20220921215522-10174 kubelet[3640]: E0921 21:58:24.623599    3640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.633009  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:25 kubernetes-upgrade-20220921215522-10174 kubelet[3651]: E0921 21:58:25.373329    3651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.633558  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3664]: E0921 21:58:26.122744    3664 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.633977  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3675]: E0921 21:58:26.871823    3675 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.634384  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:27 kubernetes-upgrade-20220921215522-10174 kubelet[3686]: E0921 21:58:27.622585    3686 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.634790  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:28 kubernetes-upgrade-20220921215522-10174 kubelet[3697]: E0921 21:58:28.376468    3697 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.635192  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3708]: E0921 21:58:29.132047    3708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.635607  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3719]: E0921 21:58:29.877817    3719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.636180  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:30 kubernetes-upgrade-20220921215522-10174 kubelet[3730]: E0921 21:58:30.636585    3730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.636584  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:31 kubernetes-upgrade-20220921215522-10174 kubelet[3740]: E0921 21:58:31.375404    3740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.637080  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3854]: E0921 21:58:32.129853    3854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.637491  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3898]: E0921 21:58:32.873032    3898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.637903  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:33 kubernetes-upgrade-20220921215522-10174 kubelet[3910]: E0921 21:58:33.628639    3910 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.638538  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:34 kubernetes-upgrade-20220921215522-10174 kubelet[3921]: E0921 21:58:34.371911    3921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.639227  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3932]: E0921 21:58:35.133679    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.639993  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3942]: E0921 21:58:35.894996    3942 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.640608  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:36 kubernetes-upgrade-20220921215522-10174 kubelet[3952]: E0921 21:58:36.645679    3952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.641273  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:37 kubernetes-upgrade-20220921215522-10174 kubelet[3962]: E0921 21:58:37.382599    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.641904  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3972]: E0921 21:58:38.148834    3972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.642580  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3982]: E0921 21:58:38.878739    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.643255  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:39 kubernetes-upgrade-20220921215522-10174 kubelet[3993]: E0921 21:58:39.634330    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.643980  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:40 kubernetes-upgrade-20220921215522-10174 kubelet[4003]: E0921 21:58:40.371691    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.644717  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4015]: E0921 21:58:41.121950    4015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.645453  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4026]: E0921 21:58:41.871405    4026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.646186  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:42 kubernetes-upgrade-20220921215522-10174 kubelet[4150]: E0921 21:58:42.630550    4150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.646881  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:43 kubernetes-upgrade-20220921215522-10174 kubelet[4185]: E0921 21:58:43.372417    4185 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.647585  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4197]: E0921 21:58:44.122124    4197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.648274  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4209]: E0921 21:58:44.872708    4209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.648983  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:45 kubernetes-upgrade-20220921215522-10174 kubelet[4219]: E0921 21:58:45.620684    4219 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.649729  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:46 kubernetes-upgrade-20220921215522-10174 kubelet[4230]: E0921 21:58:46.371087    4230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.650412  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4243]: E0921 21:58:47.121413    4243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.651088  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4254]: E0921 21:58:47.872018    4254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.651735  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:48 kubernetes-upgrade-20220921215522-10174 kubelet[4264]: E0921 21:58:48.621704    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.652208  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:49 kubernetes-upgrade-20220921215522-10174 kubelet[4275]: E0921 21:58:49.371878    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.652625  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4286]: E0921 21:58:50.122084    4286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.653051  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4297]: E0921 21:58:50.876452    4297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.653460  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:51 kubernetes-upgrade-20220921215522-10174 kubelet[4307]: E0921 21:58:51.627411    4307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.653922  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:52 kubernetes-upgrade-20220921215522-10174 kubelet[4318]: E0921 21:58:52.373933    4318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.654371  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4436]: E0921 21:58:53.129656    4436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.654790  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4480]: E0921 21:58:53.875326    4480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.655199  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:54 kubernetes-upgrade-20220921215522-10174 kubelet[4491]: E0921 21:58:54.630207    4491 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.655596  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:55 kubernetes-upgrade-20220921215522-10174 kubelet[4505]: E0921 21:58:55.382351    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.656025  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4515]: E0921 21:58:56.132997    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.656425  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4524]: E0921 21:58:56.895570    4524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.656842  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:57 kubernetes-upgrade-20220921215522-10174 kubelet[4533]: E0921 21:58:57.642639    4533 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.657313  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:58 kubernetes-upgrade-20220921215522-10174 kubelet[4545]: E0921 21:58:58.392797    4545 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.657706  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4554]: E0921 21:58:59.134963    4554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.658136  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4566]: E0921 21:58:59.874689    4566 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.658687  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:00 kubernetes-upgrade-20220921215522-10174 kubelet[4577]: E0921 21:59:00.646186    4577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.659307  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:01 kubernetes-upgrade-20220921215522-10174 kubelet[4587]: E0921 21:59:01.371592    4587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.659938  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4598]: E0921 21:59:02.124561    4598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.660598  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4610]: E0921 21:59:02.871389    4610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:03.661101  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:59:03.661121  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:59:03.684737  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:59:03.684833  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:59:03.774500  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:59:03.774530  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:59:03.774542  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:59:03.826613  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:03.826652  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:59:03.826794  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:59:03.826811  163433 out.go:239]   Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4566]: E0921 21:58:59.874689    4566 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.826822  163433 out.go:239]   Sep 21 21:59:00 kubernetes-upgrade-20220921215522-10174 kubelet[4577]: E0921 21:59:00.646186    4577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.826833  163433 out.go:239]   Sep 21 21:59:01 kubernetes-upgrade-20220921215522-10174 kubelet[4587]: E0921 21:59:01.371592    4587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.826842  163433 out.go:239]   Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4598]: E0921 21:59:02.124561    4598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:03.826854  163433 out.go:239]   Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4610]: E0921 21:59:02.871389    4610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:03.826864  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:03.826871  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:13.827459  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:59:13.862951  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:59:13.863043  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:59:13.889021  163433 cri.go:87] found id: ""
	I0921 21:59:13.889049  163433 logs.go:274] 0 containers: []
	W0921 21:59:13.889059  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:59:13.889068  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:59:13.889132  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:59:13.917766  163433 cri.go:87] found id: ""
	I0921 21:59:13.917797  163433 logs.go:274] 0 containers: []
	W0921 21:59:13.917806  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:59:13.917813  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:59:13.917869  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:59:13.944016  163433 cri.go:87] found id: ""
	I0921 21:59:13.944044  163433 logs.go:274] 0 containers: []
	W0921 21:59:13.944051  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:59:13.944057  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:59:13.944109  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:59:13.997324  163433 cri.go:87] found id: ""
	I0921 21:59:13.997351  163433 logs.go:274] 0 containers: []
	W0921 21:59:13.997357  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:59:13.997362  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:59:13.997412  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:59:14.029255  163433 cri.go:87] found id: ""
	I0921 21:59:14.029281  163433 logs.go:274] 0 containers: []
	W0921 21:59:14.029290  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:59:14.029297  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:59:14.029353  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:59:14.058305  163433 cri.go:87] found id: ""
	I0921 21:59:14.058337  163433 logs.go:274] 0 containers: []
	W0921 21:59:14.058346  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:59:14.058353  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:59:14.058408  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:59:14.084834  163433 cri.go:87] found id: ""
	I0921 21:59:14.084869  163433 logs.go:274] 0 containers: []
	W0921 21:59:14.084880  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:59:14.084889  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:59:14.084943  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:59:14.120731  163433 cri.go:87] found id: ""
	I0921 21:59:14.120756  163433 logs.go:274] 0 containers: []
	W0921 21:59:14.120765  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:59:14.120778  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:59:14.120792  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:59:14.144355  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:59:14.144399  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:59:14.228833  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:59:14.228871  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:59:14.228884  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:59:14.281289  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:59:14.281436  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:59:14.358951  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:59:14.358980  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:59:14.380506  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:24 kubernetes-upgrade-20220921215522-10174 kubelet[3640]: E0921 21:58:24.623599    3640 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.381201  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:25 kubernetes-upgrade-20220921215522-10174 kubelet[3651]: E0921 21:58:25.373329    3651 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.381905  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3664]: E0921 21:58:26.122744    3664 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.382622  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:26 kubernetes-upgrade-20220921215522-10174 kubelet[3675]: E0921 21:58:26.871823    3675 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.383327  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:27 kubernetes-upgrade-20220921215522-10174 kubelet[3686]: E0921 21:58:27.622585    3686 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.384067  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:28 kubernetes-upgrade-20220921215522-10174 kubelet[3697]: E0921 21:58:28.376468    3697 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.384815  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3708]: E0921 21:58:29.132047    3708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.385560  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:29 kubernetes-upgrade-20220921215522-10174 kubelet[3719]: E0921 21:58:29.877817    3719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.386268  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:30 kubernetes-upgrade-20220921215522-10174 kubelet[3730]: E0921 21:58:30.636585    3730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.387018  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:31 kubernetes-upgrade-20220921215522-10174 kubelet[3740]: E0921 21:58:31.375404    3740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.387826  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3854]: E0921 21:58:32.129853    3854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.388593  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:32 kubernetes-upgrade-20220921215522-10174 kubelet[3898]: E0921 21:58:32.873032    3898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.389294  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:33 kubernetes-upgrade-20220921215522-10174 kubelet[3910]: E0921 21:58:33.628639    3910 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.389995  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:34 kubernetes-upgrade-20220921215522-10174 kubelet[3921]: E0921 21:58:34.371911    3921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.390705  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3932]: E0921 21:58:35.133679    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.391412  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3942]: E0921 21:58:35.894996    3942 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.392138  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:36 kubernetes-upgrade-20220921215522-10174 kubelet[3952]: E0921 21:58:36.645679    3952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.392872  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:37 kubernetes-upgrade-20220921215522-10174 kubelet[3962]: E0921 21:58:37.382599    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.393588  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3972]: E0921 21:58:38.148834    3972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.394295  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3982]: E0921 21:58:38.878739    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.395016  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:39 kubernetes-upgrade-20220921215522-10174 kubelet[3993]: E0921 21:58:39.634330    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.395734  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:40 kubernetes-upgrade-20220921215522-10174 kubelet[4003]: E0921 21:58:40.371691    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.396470  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4015]: E0921 21:58:41.121950    4015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.397079  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4026]: E0921 21:58:41.871405    4026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.397537  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:42 kubernetes-upgrade-20220921215522-10174 kubelet[4150]: E0921 21:58:42.630550    4150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.397968  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:43 kubernetes-upgrade-20220921215522-10174 kubelet[4185]: E0921 21:58:43.372417    4185 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.398701  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4197]: E0921 21:58:44.122124    4197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.399391  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4209]: E0921 21:58:44.872708    4209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.400161  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:45 kubernetes-upgrade-20220921215522-10174 kubelet[4219]: E0921 21:58:45.620684    4219 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.400815  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:46 kubernetes-upgrade-20220921215522-10174 kubelet[4230]: E0921 21:58:46.371087    4230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.401491  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4243]: E0921 21:58:47.121413    4243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.402259  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4254]: E0921 21:58:47.872018    4254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.402699  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:48 kubernetes-upgrade-20220921215522-10174 kubelet[4264]: E0921 21:58:48.621704    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.403097  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:49 kubernetes-upgrade-20220921215522-10174 kubelet[4275]: E0921 21:58:49.371878    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.403494  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4286]: E0921 21:58:50.122084    4286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.403967  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4297]: E0921 21:58:50.876452    4297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.404610  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:51 kubernetes-upgrade-20220921215522-10174 kubelet[4307]: E0921 21:58:51.627411    4307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.405259  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:52 kubernetes-upgrade-20220921215522-10174 kubelet[4318]: E0921 21:58:52.373933    4318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.405932  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4436]: E0921 21:58:53.129656    4436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.406692  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4480]: E0921 21:58:53.875326    4480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.407374  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:54 kubernetes-upgrade-20220921215522-10174 kubelet[4491]: E0921 21:58:54.630207    4491 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.408179  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:55 kubernetes-upgrade-20220921215522-10174 kubelet[4505]: E0921 21:58:55.382351    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.408997  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4515]: E0921 21:58:56.132997    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.409755  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4524]: E0921 21:58:56.895570    4524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.410477  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:57 kubernetes-upgrade-20220921215522-10174 kubelet[4533]: E0921 21:58:57.642639    4533 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.411142  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:58 kubernetes-upgrade-20220921215522-10174 kubelet[4545]: E0921 21:58:58.392797    4545 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.411530  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4554]: E0921 21:58:59.134963    4554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.411994  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4566]: E0921 21:58:59.874689    4566 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.412616  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:00 kubernetes-upgrade-20220921215522-10174 kubelet[4577]: E0921 21:59:00.646186    4577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.413275  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:01 kubernetes-upgrade-20220921215522-10174 kubelet[4587]: E0921 21:59:01.371592    4587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.413938  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4598]: E0921 21:59:02.124561    4598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.414568  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4610]: E0921 21:59:02.871389    4610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.415219  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:03 kubernetes-upgrade-20220921215522-10174 kubelet[4727]: E0921 21:59:03.637908    4727 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.415888  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:04 kubernetes-upgrade-20220921215522-10174 kubelet[4764]: E0921 21:59:04.376354    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.416586  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4774]: E0921 21:59:05.130669    4774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.417308  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4786]: E0921 21:59:05.874316    4786 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.417981  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:06 kubernetes-upgrade-20220921215522-10174 kubelet[4797]: E0921 21:59:06.621813    4797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.418648  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:07 kubernetes-upgrade-20220921215522-10174 kubelet[4808]: E0921 21:59:07.380124    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.419353  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4819]: E0921 21:59:08.126226    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.420065  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4830]: E0921 21:59:08.872713    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.420761  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:09 kubernetes-upgrade-20220921215522-10174 kubelet[4840]: E0921 21:59:09.623509    4840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.421474  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:10 kubernetes-upgrade-20220921215522-10174 kubelet[4850]: E0921 21:59:10.374150    4850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.422180  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4861]: E0921 21:59:11.139623    4861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.422889  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4872]: E0921 21:59:11.874420    4872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.423592  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:12 kubernetes-upgrade-20220921215522-10174 kubelet[4883]: E0921 21:59:12.643620    4883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.424317  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:13 kubernetes-upgrade-20220921215522-10174 kubelet[4893]: E0921 21:59:13.393673    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.425029  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[4988]: E0921 21:59:14.157483    4988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:14.425264  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:14.425281  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:59:14.425406  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:59:14.425461  163433 out.go:239]   Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4861]: E0921 21:59:11.139623    4861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.425486  163433 out.go:239]   Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4872]: E0921 21:59:11.874420    4872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.425508  163433 out.go:239]   Sep 21 21:59:12 kubernetes-upgrade-20220921215522-10174 kubelet[4883]: E0921 21:59:12.643620    4883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.425531  163433 out.go:239]   Sep 21 21:59:13 kubernetes-upgrade-20220921215522-10174 kubelet[4893]: E0921 21:59:13.393673    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:14.425553  163433 out.go:239]   Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[4988]: E0921 21:59:14.157483    4988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:14.425566  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:14.425574  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
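
The loop above is the whole failure in miniature: every restarted kubelet (PIDs 3932 through 4988) exits immediately with "failed to parse kubelet flag: unknown flag: --cni-conf-dir". That flag belonged to kubelet's dockershim-era network-plugin options and, as the error itself shows, is no longer accepted by the v1.25.2 kubelet this upgrade installs, so a flag carried over from the v1.16.0 profile makes the new kubelet crash-loop before any control-plane container can start. Below is a minimal Go sketch, assuming only the failure marker matters, of the kind of journalctl scan logs.go performs here; names and structure are illustrative, not minikube's actual code.

	// scan.go - hypothetical sketch: run the same journalctl command the
	// log shows and flag kubelet lines carrying the failure signature.
	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo journalctl -u kubelet -n 400").Output()
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			// "command failed" is the marker every crash above shares.
			if strings.Contains(sc.Text(), `"command failed"`) {
				fmt.Println("Found kubelet problem:", sc.Text())
			}
		}
	}
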
	I0921 21:59:24.427329  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:59:24.862945  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:59:24.863012  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:59:24.889625  163433 cri.go:87] found id: ""
	I0921 21:59:24.889655  163433 logs.go:274] 0 containers: []
	W0921 21:59:24.889664  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:59:24.889672  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:59:24.889719  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:59:24.916752  163433 cri.go:87] found id: ""
	I0921 21:59:24.916775  163433 logs.go:274] 0 containers: []
	W0921 21:59:24.916781  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:59:24.916787  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:59:24.916843  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:59:24.942910  163433 cri.go:87] found id: ""
	I0921 21:59:24.942936  163433 logs.go:274] 0 containers: []
	W0921 21:59:24.942942  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:59:24.942948  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:59:24.943006  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:59:24.967748  163433 cri.go:87] found id: ""
	I0921 21:59:24.967781  163433 logs.go:274] 0 containers: []
	W0921 21:59:24.967790  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:59:24.967797  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:59:24.967879  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:59:24.992649  163433 cri.go:87] found id: ""
	I0921 21:59:24.992683  163433 logs.go:274] 0 containers: []
	W0921 21:59:24.992693  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:59:24.992701  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:59:24.992788  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:59:25.017584  163433 cri.go:87] found id: ""
	I0921 21:59:25.017606  163433 logs.go:274] 0 containers: []
	W0921 21:59:25.017612  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:59:25.017618  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:59:25.017662  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:59:25.045775  163433 cri.go:87] found id: ""
	I0921 21:59:25.045806  163433 logs.go:274] 0 containers: []
	W0921 21:59:25.045816  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:59:25.045825  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:59:25.045904  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:59:25.075250  163433 cri.go:87] found id: ""
	I0921 21:59:25.075273  163433 logs.go:274] 0 containers: []
	W0921 21:59:25.075279  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:59:25.075288  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:59:25.075297  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:59:25.092668  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3932]: E0921 21:58:35.133679    3932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.093155  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:35 kubernetes-upgrade-20220921215522-10174 kubelet[3942]: E0921 21:58:35.894996    3942 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.093586  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:36 kubernetes-upgrade-20220921215522-10174 kubelet[3952]: E0921 21:58:36.645679    3952 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.094014  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:37 kubernetes-upgrade-20220921215522-10174 kubelet[3962]: E0921 21:58:37.382599    3962 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.094455  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3972]: E0921 21:58:38.148834    3972 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.094889  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:38 kubernetes-upgrade-20220921215522-10174 kubelet[3982]: E0921 21:58:38.878739    3982 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.095311  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:39 kubernetes-upgrade-20220921215522-10174 kubelet[3993]: E0921 21:58:39.634330    3993 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.095949  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:40 kubernetes-upgrade-20220921215522-10174 kubelet[4003]: E0921 21:58:40.371691    4003 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.096399  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4015]: E0921 21:58:41.121950    4015 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.096826  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:41 kubernetes-upgrade-20220921215522-10174 kubelet[4026]: E0921 21:58:41.871405    4026 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.097291  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:42 kubernetes-upgrade-20220921215522-10174 kubelet[4150]: E0921 21:58:42.630550    4150 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.097736  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:43 kubernetes-upgrade-20220921215522-10174 kubelet[4185]: E0921 21:58:43.372417    4185 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.098287  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4197]: E0921 21:58:44.122124    4197 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.098745  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:44 kubernetes-upgrade-20220921215522-10174 kubelet[4209]: E0921 21:58:44.872708    4209 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.099303  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:45 kubernetes-upgrade-20220921215522-10174 kubelet[4219]: E0921 21:58:45.620684    4219 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.099973  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:46 kubernetes-upgrade-20220921215522-10174 kubelet[4230]: E0921 21:58:46.371087    4230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.100428  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4243]: E0921 21:58:47.121413    4243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.100852  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4254]: E0921 21:58:47.872018    4254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.101336  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:48 kubernetes-upgrade-20220921215522-10174 kubelet[4264]: E0921 21:58:48.621704    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.101816  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:49 kubernetes-upgrade-20220921215522-10174 kubelet[4275]: E0921 21:58:49.371878    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.102368  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4286]: E0921 21:58:50.122084    4286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.102904  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4297]: E0921 21:58:50.876452    4297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.103553  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:51 kubernetes-upgrade-20220921215522-10174 kubelet[4307]: E0921 21:58:51.627411    4307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.104130  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:52 kubernetes-upgrade-20220921215522-10174 kubelet[4318]: E0921 21:58:52.373933    4318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.104527  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4436]: E0921 21:58:53.129656    4436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.104940  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4480]: E0921 21:58:53.875326    4480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.105401  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:54 kubernetes-upgrade-20220921215522-10174 kubelet[4491]: E0921 21:58:54.630207    4491 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.105865  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:55 kubernetes-upgrade-20220921215522-10174 kubelet[4505]: E0921 21:58:55.382351    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.106302  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4515]: E0921 21:58:56.132997    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.106708  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4524]: E0921 21:58:56.895570    4524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.107116  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:57 kubernetes-upgrade-20220921215522-10174 kubelet[4533]: E0921 21:58:57.642639    4533 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.107506  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:58 kubernetes-upgrade-20220921215522-10174 kubelet[4545]: E0921 21:58:58.392797    4545 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.107963  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4554]: E0921 21:58:59.134963    4554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.108361  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4566]: E0921 21:58:59.874689    4566 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.108746  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:00 kubernetes-upgrade-20220921215522-10174 kubelet[4577]: E0921 21:59:00.646186    4577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.109154  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:01 kubernetes-upgrade-20220921215522-10174 kubelet[4587]: E0921 21:59:01.371592    4587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.109547  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4598]: E0921 21:59:02.124561    4598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.109950  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4610]: E0921 21:59:02.871389    4610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.110350  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:03 kubernetes-upgrade-20220921215522-10174 kubelet[4727]: E0921 21:59:03.637908    4727 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.110741  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:04 kubernetes-upgrade-20220921215522-10174 kubelet[4764]: E0921 21:59:04.376354    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.111140  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4774]: E0921 21:59:05.130669    4774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.111527  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4786]: E0921 21:59:05.874316    4786 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.111945  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:06 kubernetes-upgrade-20220921215522-10174 kubelet[4797]: E0921 21:59:06.621813    4797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.112383  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:07 kubernetes-upgrade-20220921215522-10174 kubelet[4808]: E0921 21:59:07.380124    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.112788  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4819]: E0921 21:59:08.126226    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.113195  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4830]: E0921 21:59:08.872713    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.113588  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:09 kubernetes-upgrade-20220921215522-10174 kubelet[4840]: E0921 21:59:09.623509    4840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.113982  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:10 kubernetes-upgrade-20220921215522-10174 kubelet[4850]: E0921 21:59:10.374150    4850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.114407  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4861]: E0921 21:59:11.139623    4861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.114807  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4872]: E0921 21:59:11.874420    4872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.115196  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:12 kubernetes-upgrade-20220921215522-10174 kubelet[4883]: E0921 21:59:12.643620    4883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.115624  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:13 kubernetes-upgrade-20220921215522-10174 kubelet[4893]: E0921 21:59:13.393673    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.116047  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[4988]: E0921 21:59:14.157483    4988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.116444  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[5049]: E0921 21:59:14.875596    5049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.116838  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:15 kubernetes-upgrade-20220921215522-10174 kubelet[5060]: E0921 21:59:15.632288    5060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.117230  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:16 kubernetes-upgrade-20220921215522-10174 kubelet[5071]: E0921 21:59:16.381461    5071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.117643  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5082]: E0921 21:59:17.132884    5082 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.118148  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5093]: E0921 21:59:17.876596    5093 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.118805  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:18 kubernetes-upgrade-20220921215522-10174 kubelet[5104]: E0921 21:59:18.633938    5104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.119238  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:19 kubernetes-upgrade-20220921215522-10174 kubelet[5115]: E0921 21:59:19.373556    5115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.119633  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5125]: E0921 21:59:20.124415    5125 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.120086  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5136]: E0921 21:59:20.873303    5136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.120485  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:21 kubernetes-upgrade-20220921215522-10174 kubelet[5147]: E0921 21:59:21.625541    5147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.120888  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:22 kubernetes-upgrade-20220921215522-10174 kubelet[5158]: E0921 21:59:22.381102    5158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.121289  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5170]: E0921 21:59:23.127688    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.121680  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5181]: E0921 21:59:23.885323    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.122128  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:24 kubernetes-upgrade-20220921215522-10174 kubelet[5194]: E0921 21:59:24.636411    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:25.122261  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:59:25.122278  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:59:25.137915  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:59:25.137947  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:59:25.200414  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
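
Each crictl probe in this pass came back empty, so the describe-nodes failure above is expected rather than a separate fault: with no kube-apiserver container running, nothing serves localhost:8443 and kubectl's connection is refused. A minimal Go sketch of the same emptiness check follows; it is illustrative only, and minikube's cri.go does more than this.

	// probe.go - hypothetical sketch: list CRI containers named
	// kube-apiserver, as the log's crictl command does; an empty
	// result explains the refused connection on localhost:8443.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		if ids := strings.TrimSpace(string(out)); ids != "" {
			fmt.Println("found id:", ids)
			return
		}
		fmt.Println("no kube-apiserver container; localhost:8443 will refuse connections")
	}
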
	I0921 21:59:25.200436  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:59:25.200448  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:59:25.236786  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:59:25.236821  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:59:25.261691  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:25.261716  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:59:25.261828  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:59:25.261845  163433 out.go:239]   Sep 21 21:59:21 kubernetes-upgrade-20220921215522-10174 kubelet[5147]: E0921 21:59:21.625541    5147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.261853  163433 out.go:239]   Sep 21 21:59:22 kubernetes-upgrade-20220921215522-10174 kubelet[5158]: E0921 21:59:22.381102    5158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.261866  163433 out.go:239]   Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5170]: E0921 21:59:23.127688    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.261877  163433 out.go:239]   Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5181]: E0921 21:59:23.885323    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:25.261886  163433 out.go:239]   Sep 21 21:59:24 kubernetes-upgrade-20220921215522-10174 kubelet[5194]: E0921 21:59:24.636411    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:25.261890  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:25.261895  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
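
Note the cadence: the harness polls for an apiserver process at 21:59:14 and 21:59:24, and again below at 21:59:35, re-gathering logs each time it finds nothing. A hypothetical Go sketch of such a wait loop follows, reusing the pgrep pattern from the log; the ten-second interval matches the timestamps, while the deadline is an assumption, not minikube's actual budget.

	// wait.go - hypothetical sketch of the apiserver wait loop the
	// timestamps suggest: poll pgrep until a kube-apiserver process
	// appears or an assumed deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed budget
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
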
	I0921 21:59:35.263224  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:59:35.362980  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:59:35.363055  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:59:35.386410  163433 cri.go:87] found id: ""
	I0921 21:59:35.386436  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.386444  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:59:35.386452  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:59:35.386505  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:59:35.409479  163433 cri.go:87] found id: ""
	I0921 21:59:35.409509  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.409519  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:59:35.409527  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:59:35.409581  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:59:35.433409  163433 cri.go:87] found id: ""
	I0921 21:59:35.433457  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.433467  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:59:35.433476  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:59:35.433532  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:59:35.456657  163433 cri.go:87] found id: ""
	I0921 21:59:35.456688  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.456696  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:59:35.456702  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:59:35.456746  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:59:35.479638  163433 cri.go:87] found id: ""
	I0921 21:59:35.479663  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.479671  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:59:35.479679  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:59:35.479764  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:59:35.503409  163433 cri.go:87] found id: ""
	I0921 21:59:35.503432  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.503438  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:59:35.503444  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:59:35.503484  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:59:35.527852  163433 cri.go:87] found id: ""
	I0921 21:59:35.527884  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.527893  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:59:35.527902  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:59:35.527958  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:59:35.552383  163433 cri.go:87] found id: ""
	I0921 21:59:35.552405  163433 logs.go:274] 0 containers: []
	W0921 21:59:35.552411  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:59:35.552420  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:59:35.552435  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:59:35.568142  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:45 kubernetes-upgrade-20220921215522-10174 kubelet[4219]: E0921 21:58:45.620684    4219 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.568545  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:46 kubernetes-upgrade-20220921215522-10174 kubelet[4230]: E0921 21:58:46.371087    4230 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.568942  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4243]: E0921 21:58:47.121413    4243 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.569327  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:47 kubernetes-upgrade-20220921215522-10174 kubelet[4254]: E0921 21:58:47.872018    4254 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.569713  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:48 kubernetes-upgrade-20220921215522-10174 kubelet[4264]: E0921 21:58:48.621704    4264 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.570092  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:49 kubernetes-upgrade-20220921215522-10174 kubelet[4275]: E0921 21:58:49.371878    4275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.570470  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4286]: E0921 21:58:50.122084    4286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.570851  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:50 kubernetes-upgrade-20220921215522-10174 kubelet[4297]: E0921 21:58:50.876452    4297 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.571237  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:51 kubernetes-upgrade-20220921215522-10174 kubelet[4307]: E0921 21:58:51.627411    4307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.571613  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:52 kubernetes-upgrade-20220921215522-10174 kubelet[4318]: E0921 21:58:52.373933    4318 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.572063  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4436]: E0921 21:58:53.129656    4436 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.572450  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:53 kubernetes-upgrade-20220921215522-10174 kubelet[4480]: E0921 21:58:53.875326    4480 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.572830  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:54 kubernetes-upgrade-20220921215522-10174 kubelet[4491]: E0921 21:58:54.630207    4491 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.573209  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:55 kubernetes-upgrade-20220921215522-10174 kubelet[4505]: E0921 21:58:55.382351    4505 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.573586  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4515]: E0921 21:58:56.132997    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.573966  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4524]: E0921 21:58:56.895570    4524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.574354  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:57 kubernetes-upgrade-20220921215522-10174 kubelet[4533]: E0921 21:58:57.642639    4533 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.574740  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:58 kubernetes-upgrade-20220921215522-10174 kubelet[4545]: E0921 21:58:58.392797    4545 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.575121  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4554]: E0921 21:58:59.134963    4554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.575510  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4566]: E0921 21:58:59.874689    4566 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.575925  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:00 kubernetes-upgrade-20220921215522-10174 kubelet[4577]: E0921 21:59:00.646186    4577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.576302  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:01 kubernetes-upgrade-20220921215522-10174 kubelet[4587]: E0921 21:59:01.371592    4587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.576675  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4598]: E0921 21:59:02.124561    4598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.577073  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4610]: E0921 21:59:02.871389    4610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.577462  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:03 kubernetes-upgrade-20220921215522-10174 kubelet[4727]: E0921 21:59:03.637908    4727 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.577843  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:04 kubernetes-upgrade-20220921215522-10174 kubelet[4764]: E0921 21:59:04.376354    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.578231  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4774]: E0921 21:59:05.130669    4774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.578613  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4786]: E0921 21:59:05.874316    4786 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.578996  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:06 kubernetes-upgrade-20220921215522-10174 kubelet[4797]: E0921 21:59:06.621813    4797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.579378  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:07 kubernetes-upgrade-20220921215522-10174 kubelet[4808]: E0921 21:59:07.380124    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.579842  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4819]: E0921 21:59:08.126226    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.580238  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4830]: E0921 21:59:08.872713    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.580613  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:09 kubernetes-upgrade-20220921215522-10174 kubelet[4840]: E0921 21:59:09.623509    4840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.581021  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:10 kubernetes-upgrade-20220921215522-10174 kubelet[4850]: E0921 21:59:10.374150    4850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.581407  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4861]: E0921 21:59:11.139623    4861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.581784  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4872]: E0921 21:59:11.874420    4872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.582154  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:12 kubernetes-upgrade-20220921215522-10174 kubelet[4883]: E0921 21:59:12.643620    4883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.582645  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:13 kubernetes-upgrade-20220921215522-10174 kubelet[4893]: E0921 21:59:13.393673    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.583026  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[4988]: E0921 21:59:14.157483    4988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.583443  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[5049]: E0921 21:59:14.875596    5049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.583923  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:15 kubernetes-upgrade-20220921215522-10174 kubelet[5060]: E0921 21:59:15.632288    5060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.584306  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:16 kubernetes-upgrade-20220921215522-10174 kubelet[5071]: E0921 21:59:16.381461    5071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.584686  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5082]: E0921 21:59:17.132884    5082 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.585067  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5093]: E0921 21:59:17.876596    5093 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.585453  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:18 kubernetes-upgrade-20220921215522-10174 kubelet[5104]: E0921 21:59:18.633938    5104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.585840  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:19 kubernetes-upgrade-20220921215522-10174 kubelet[5115]: E0921 21:59:19.373556    5115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.586229  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5125]: E0921 21:59:20.124415    5125 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.586607  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5136]: E0921 21:59:20.873303    5136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.586989  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:21 kubernetes-upgrade-20220921215522-10174 kubelet[5147]: E0921 21:59:21.625541    5147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.587380  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:22 kubernetes-upgrade-20220921215522-10174 kubelet[5158]: E0921 21:59:22.381102    5158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.587789  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5170]: E0921 21:59:23.127688    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.588178  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5181]: E0921 21:59:23.885323    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.588556  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:24 kubernetes-upgrade-20220921215522-10174 kubelet[5194]: E0921 21:59:24.636411    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.588940  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:25 kubernetes-upgrade-20220921215522-10174 kubelet[5337]: E0921 21:59:25.370164    5337 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.589334  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5348]: E0921 21:59:26.124915    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.589723  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5359]: E0921 21:59:26.873809    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.590103  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:27 kubernetes-upgrade-20220921215522-10174 kubelet[5370]: E0921 21:59:27.628142    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.590504  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:28 kubernetes-upgrade-20220921215522-10174 kubelet[5382]: E0921 21:59:28.390222    5382 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.590886  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5393]: E0921 21:59:29.137158    5393 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.591261  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5402]: E0921 21:59:29.876549    5402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.591656  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:30 kubernetes-upgrade-20220921215522-10174 kubelet[5412]: E0921 21:59:30.626109    5412 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.592093  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:31 kubernetes-upgrade-20220921215522-10174 kubelet[5423]: E0921 21:59:31.374600    5423 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.592482  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5435]: E0921 21:59:32.145394    5435 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.592860  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5446]: E0921 21:59:32.870591    5446 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.593241  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:33 kubernetes-upgrade-20220921215522-10174 kubelet[5457]: E0921 21:59:33.620542    5457 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.593620  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:34 kubernetes-upgrade-20220921215522-10174 kubelet[5468]: E0921 21:59:34.371200    5468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.594034  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5479]: E0921 21:59:35.124735    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:35.594196  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:59:35.594212  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:59:35.608913  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:59:35.608939  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:59:35.661660  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:59:35.661689  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:59:35.661701  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:59:35.698240  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:59:35.698271  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:59:35.724050  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:35.724077  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:59:35.724232  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:59:35.724252  163433 out.go:239]   Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5435]: E0921 21:59:32.145394    5435 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.724263  163433 out.go:239]   Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5446]: E0921 21:59:32.870591    5446 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.724269  163433 out.go:239]   Sep 21 21:59:33 kubernetes-upgrade-20220921215522-10174 kubelet[5457]: E0921 21:59:33.620542    5457 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.724274  163433 out.go:239]   Sep 21 21:59:34 kubernetes-upgrade-20220921215522-10174 kubelet[5468]: E0921 21:59:34.371200    5468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:35.724279  163433 out.go:239]   Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5479]: E0921 21:59:35.124735    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:35.724283  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:35.724288  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
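The repeating kubelet entries above share a single root cause: kubelet v1.24 removed the dockershim-era networking flags (--cni-conf-dir, --cni-bin-dir, --network-plugin), so the v1.25.2 kubelet exits the moment it is started with --cni-conf-dir and systemd restarts it, which is why each entry carries a fresh PID. A minimal sketch for confirming this by hand, assuming the kubelet binary sits next to the kubectl path shown above and that the stale flag lives in the usual kubeadm drop-in; both paths are assumptions, not taken from this run:

	# Expect no match: the v1.25.2 kubelet no longer knows --cni-conf-dir (assumed binary path).
	out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- \
	  "/var/lib/minikube/binaries/v1.25.2/kubelet --help 2>&1 | grep cni-conf-dir || echo 'flag gone in v1.25.2'"
	# Find where the stale flag is still being passed (assumed config locations).
	out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- \
	  "sudo grep -rn 'cni-conf-dir' /etc/systemd/system/kubelet.service.d /var/lib/kubelet 2>/dev/null"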
	I0921 21:59:45.725263  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:59:45.863196  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:59:45.863285  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:59:45.888025  163433 cri.go:87] found id: ""
	I0921 21:59:45.888053  163433 logs.go:274] 0 containers: []
	W0921 21:59:45.888060  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:59:45.888066  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:59:45.888126  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:59:45.910603  163433 cri.go:87] found id: ""
	I0921 21:59:45.910635  163433 logs.go:274] 0 containers: []
	W0921 21:59:45.910644  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:59:45.910651  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:59:45.910695  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:59:45.934248  163433 cri.go:87] found id: ""
	I0921 21:59:45.934274  163433 logs.go:274] 0 containers: []
	W0921 21:59:45.934283  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:59:45.934291  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:59:45.934348  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:59:45.956614  163433 cri.go:87] found id: ""
	I0921 21:59:45.956643  163433 logs.go:274] 0 containers: []
	W0921 21:59:45.956653  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:59:45.956659  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:59:45.956703  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:59:45.978721  163433 cri.go:87] found id: ""
	I0921 21:59:45.978748  163433 logs.go:274] 0 containers: []
	W0921 21:59:45.978756  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:59:45.978763  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:59:45.978831  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:59:46.005441  163433 cri.go:87] found id: ""
	I0921 21:59:46.005480  163433 logs.go:274] 0 containers: []
	W0921 21:59:46.005490  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:59:46.005498  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:59:46.005548  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:59:46.029762  163433 cri.go:87] found id: ""
	I0921 21:59:46.029788  163433 logs.go:274] 0 containers: []
	W0921 21:59:46.029794  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:59:46.029799  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:59:46.029853  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:59:46.054492  163433 cri.go:87] found id: ""
	I0921 21:59:46.054520  163433 logs.go:274] 0 containers: []
	W0921 21:59:46.054527  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:59:46.054536  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:59:46.054545  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:59:46.071489  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4515]: E0921 21:58:56.132997    4515 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.071963  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:56 kubernetes-upgrade-20220921215522-10174 kubelet[4524]: E0921 21:58:56.895570    4524 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.072364  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:57 kubernetes-upgrade-20220921215522-10174 kubelet[4533]: E0921 21:58:57.642639    4533 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.072764  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:58 kubernetes-upgrade-20220921215522-10174 kubelet[4545]: E0921 21:58:58.392797    4545 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.073157  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4554]: E0921 21:58:59.134963    4554 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.073541  163433 logs.go:138] Found kubelet problem: Sep 21 21:58:59 kubernetes-upgrade-20220921215522-10174 kubelet[4566]: E0921 21:58:59.874689    4566 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.073922  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:00 kubernetes-upgrade-20220921215522-10174 kubelet[4577]: E0921 21:59:00.646186    4577 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.074333  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:01 kubernetes-upgrade-20220921215522-10174 kubelet[4587]: E0921 21:59:01.371592    4587 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.074725  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4598]: E0921 21:59:02.124561    4598 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.075109  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:02 kubernetes-upgrade-20220921215522-10174 kubelet[4610]: E0921 21:59:02.871389    4610 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.075506  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:03 kubernetes-upgrade-20220921215522-10174 kubelet[4727]: E0921 21:59:03.637908    4727 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.075927  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:04 kubernetes-upgrade-20220921215522-10174 kubelet[4764]: E0921 21:59:04.376354    4764 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.076403  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4774]: E0921 21:59:05.130669    4774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.076802  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:05 kubernetes-upgrade-20220921215522-10174 kubelet[4786]: E0921 21:59:05.874316    4786 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.077187  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:06 kubernetes-upgrade-20220921215522-10174 kubelet[4797]: E0921 21:59:06.621813    4797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.077614  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:07 kubernetes-upgrade-20220921215522-10174 kubelet[4808]: E0921 21:59:07.380124    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.078027  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4819]: E0921 21:59:08.126226    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.078404  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4830]: E0921 21:59:08.872713    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.078813  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:09 kubernetes-upgrade-20220921215522-10174 kubelet[4840]: E0921 21:59:09.623509    4840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.079208  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:10 kubernetes-upgrade-20220921215522-10174 kubelet[4850]: E0921 21:59:10.374150    4850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.079629  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4861]: E0921 21:59:11.139623    4861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.080047  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4872]: E0921 21:59:11.874420    4872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.080439  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:12 kubernetes-upgrade-20220921215522-10174 kubelet[4883]: E0921 21:59:12.643620    4883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.080847  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:13 kubernetes-upgrade-20220921215522-10174 kubelet[4893]: E0921 21:59:13.393673    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.081244  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[4988]: E0921 21:59:14.157483    4988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.081627  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[5049]: E0921 21:59:14.875596    5049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.082033  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:15 kubernetes-upgrade-20220921215522-10174 kubelet[5060]: E0921 21:59:15.632288    5060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.082440  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:16 kubernetes-upgrade-20220921215522-10174 kubelet[5071]: E0921 21:59:16.381461    5071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.082830  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5082]: E0921 21:59:17.132884    5082 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.083234  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5093]: E0921 21:59:17.876596    5093 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.083657  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:18 kubernetes-upgrade-20220921215522-10174 kubelet[5104]: E0921 21:59:18.633938    5104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.084123  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:19 kubernetes-upgrade-20220921215522-10174 kubelet[5115]: E0921 21:59:19.373556    5115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.084503  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5125]: E0921 21:59:20.124415    5125 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.084908  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5136]: E0921 21:59:20.873303    5136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.085295  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:21 kubernetes-upgrade-20220921215522-10174 kubelet[5147]: E0921 21:59:21.625541    5147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.085678  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:22 kubernetes-upgrade-20220921215522-10174 kubelet[5158]: E0921 21:59:22.381102    5158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.086084  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5170]: E0921 21:59:23.127688    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.086490  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5181]: E0921 21:59:23.885323    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.086877  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:24 kubernetes-upgrade-20220921215522-10174 kubelet[5194]: E0921 21:59:24.636411    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.087274  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:25 kubernetes-upgrade-20220921215522-10174 kubelet[5337]: E0921 21:59:25.370164    5337 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.087662  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5348]: E0921 21:59:26.124915    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.088082  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5359]: E0921 21:59:26.873809    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.088489  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:27 kubernetes-upgrade-20220921215522-10174 kubelet[5370]: E0921 21:59:27.628142    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.088873  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:28 kubernetes-upgrade-20220921215522-10174 kubelet[5382]: E0921 21:59:28.390222    5382 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.089268  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5393]: E0921 21:59:29.137158    5393 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.089672  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5402]: E0921 21:59:29.876549    5402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.090059  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:30 kubernetes-upgrade-20220921215522-10174 kubelet[5412]: E0921 21:59:30.626109    5412 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.090437  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:31 kubernetes-upgrade-20220921215522-10174 kubelet[5423]: E0921 21:59:31.374600    5423 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.090832  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5435]: E0921 21:59:32.145394    5435 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.091270  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5446]: E0921 21:59:32.870591    5446 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.091648  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:33 kubernetes-upgrade-20220921215522-10174 kubelet[5457]: E0921 21:59:33.620542    5457 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.092083  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:34 kubernetes-upgrade-20220921215522-10174 kubelet[5468]: E0921 21:59:34.371200    5468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.092459  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5479]: E0921 21:59:35.124735    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.092877  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5630]: E0921 21:59:35.871827    5630 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.093257  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:36 kubernetes-upgrade-20220921215522-10174 kubelet[5641]: E0921 21:59:36.621845    5641 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.093648  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:37 kubernetes-upgrade-20220921215522-10174 kubelet[5652]: E0921 21:59:37.378142    5652 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.094039  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5663]: E0921 21:59:38.135567    5663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.094431  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5674]: E0921 21:59:38.874212    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.094827  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:39 kubernetes-upgrade-20220921215522-10174 kubelet[5685]: E0921 21:59:39.623075    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.095214  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:40 kubernetes-upgrade-20220921215522-10174 kubelet[5696]: E0921 21:59:40.374289    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.095594  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5708]: E0921 21:59:41.126179    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.096006  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5719]: E0921 21:59:41.879591    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.096405  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:42 kubernetes-upgrade-20220921215522-10174 kubelet[5729]: E0921 21:59:42.633853    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.096801  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:43 kubernetes-upgrade-20220921215522-10174 kubelet[5740]: E0921 21:59:43.387451    5740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.097204  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5752]: E0921 21:59:44.128455    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.097605  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5762]: E0921 21:59:44.874407    5762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.098014  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:45 kubernetes-upgrade-20220921215522-10174 kubelet[5774]: E0921 21:59:45.623741    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:46.098145  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:59:46.098163  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:59:46.114435  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:59:46.114473  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:59:46.168644  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:59:46.168667  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:59:46.168679  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:59:46.204920  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:59:46.204953  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:59:46.231338  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:46.231372  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:59:46.231484  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:59:46.231501  163433 out.go:239]   Sep 21 21:59:42 kubernetes-upgrade-20220921215522-10174 kubelet[5729]: E0921 21:59:42.633853    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.231509  163433 out.go:239]   Sep 21 21:59:43 kubernetes-upgrade-20220921215522-10174 kubelet[5740]: E0921 21:59:43.387451    5740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.231522  163433 out.go:239]   Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5752]: E0921 21:59:44.128455    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.231533  163433 out.go:239]   Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5762]: E0921 21:59:44.874407    5762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:46.231544  163433 out.go:239]   Sep 21 21:59:45 kubernetes-upgrade-20220921215522-10174 kubelet[5774]: E0921 21:59:45.623741    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:46.231556  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:46.231568  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
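With kubelet never staying up, the static pods that form the control plane are never created; that is why every crictl query above returns an empty list and why kubectl is refused on localhost:8443. A quick probe sketch, assuming ss is available in the node image (illustrative, not part of this run):

	# Nothing should be bound to the apiserver port while kubelet crash-loops.
	out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- \
	  "sudo ss -tlnp | grep ':8443' || echo 'nothing listening on 8443'"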
	I0921 21:59:56.232362  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:59:56.363254  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 21:59:56.363320  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 21:59:56.386079  163433 cri.go:87] found id: ""
	I0921 21:59:56.386105  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.386112  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 21:59:56.386119  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 21:59:56.386160  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 21:59:56.408178  163433 cri.go:87] found id: ""
	I0921 21:59:56.408206  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.408215  163433 logs.go:276] No container was found matching "etcd"
	I0921 21:59:56.408222  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 21:59:56.408273  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 21:59:56.431215  163433 cri.go:87] found id: ""
	I0921 21:59:56.431240  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.431246  163433 logs.go:276] No container was found matching "coredns"
	I0921 21:59:56.431251  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 21:59:56.431308  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 21:59:56.453885  163433 cri.go:87] found id: ""
	I0921 21:59:56.453913  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.453919  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 21:59:56.453925  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 21:59:56.453976  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 21:59:56.476089  163433 cri.go:87] found id: ""
	I0921 21:59:56.476110  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.476116  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 21:59:56.476121  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 21:59:56.476162  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 21:59:56.498717  163433 cri.go:87] found id: ""
	I0921 21:59:56.498748  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.498755  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 21:59:56.498761  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 21:59:56.498803  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 21:59:56.521339  163433 cri.go:87] found id: ""
	I0921 21:59:56.521363  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.521369  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 21:59:56.521375  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 21:59:56.521415  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 21:59:56.544377  163433 cri.go:87] found id: ""
	I0921 21:59:56.544405  163433 logs.go:274] 0 containers: []
	W0921 21:59:56.544414  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 21:59:56.544426  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 21:59:56.544443  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 21:59:56.558351  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 21:59:56.558375  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 21:59:56.615428  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 21:59:56.615459  163433 logs.go:123] Gathering logs for containerd ...
	I0921 21:59:56.615473  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 21:59:56.658412  163433 logs.go:123] Gathering logs for container status ...
	I0921 21:59:56.658459  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 21:59:56.710314  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 21:59:56.710351  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 21:59:56.728263  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:06 kubernetes-upgrade-20220921215522-10174 kubelet[4797]: E0921 21:59:06.621813    4797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.728925  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:07 kubernetes-upgrade-20220921215522-10174 kubelet[4808]: E0921 21:59:07.380124    4808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.729566  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4819]: E0921 21:59:08.126226    4819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.730201  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:08 kubernetes-upgrade-20220921215522-10174 kubelet[4830]: E0921 21:59:08.872713    4830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.730803  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:09 kubernetes-upgrade-20220921215522-10174 kubelet[4840]: E0921 21:59:09.623509    4840 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.731276  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:10 kubernetes-upgrade-20220921215522-10174 kubelet[4850]: E0921 21:59:10.374150    4850 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.731946  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4861]: E0921 21:59:11.139623    4861 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.732470  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:11 kubernetes-upgrade-20220921215522-10174 kubelet[4872]: E0921 21:59:11.874420    4872 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.732892  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:12 kubernetes-upgrade-20220921215522-10174 kubelet[4883]: E0921 21:59:12.643620    4883 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.733286  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:13 kubernetes-upgrade-20220921215522-10174 kubelet[4893]: E0921 21:59:13.393673    4893 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.733674  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[4988]: E0921 21:59:14.157483    4988 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.734055  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:14 kubernetes-upgrade-20220921215522-10174 kubelet[5049]: E0921 21:59:14.875596    5049 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.734445  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:15 kubernetes-upgrade-20220921215522-10174 kubelet[5060]: E0921 21:59:15.632288    5060 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.734851  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:16 kubernetes-upgrade-20220921215522-10174 kubelet[5071]: E0921 21:59:16.381461    5071 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.735284  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5082]: E0921 21:59:17.132884    5082 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.735667  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5093]: E0921 21:59:17.876596    5093 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.736136  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:18 kubernetes-upgrade-20220921215522-10174 kubelet[5104]: E0921 21:59:18.633938    5104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.736526  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:19 kubernetes-upgrade-20220921215522-10174 kubelet[5115]: E0921 21:59:19.373556    5115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.737107  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5125]: E0921 21:59:20.124415    5125 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.737773  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5136]: E0921 21:59:20.873303    5136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.738249  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:21 kubernetes-upgrade-20220921215522-10174 kubelet[5147]: E0921 21:59:21.625541    5147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.738650  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:22 kubernetes-upgrade-20220921215522-10174 kubelet[5158]: E0921 21:59:22.381102    5158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.739025  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5170]: E0921 21:59:23.127688    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.739412  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5181]: E0921 21:59:23.885323    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.739842  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:24 kubernetes-upgrade-20220921215522-10174 kubelet[5194]: E0921 21:59:24.636411    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.740230  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:25 kubernetes-upgrade-20220921215522-10174 kubelet[5337]: E0921 21:59:25.370164    5337 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.740617  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5348]: E0921 21:59:26.124915    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.741134  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5359]: E0921 21:59:26.873809    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.741597  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:27 kubernetes-upgrade-20220921215522-10174 kubelet[5370]: E0921 21:59:27.628142    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.742008  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:28 kubernetes-upgrade-20220921215522-10174 kubelet[5382]: E0921 21:59:28.390222    5382 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.742422  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5393]: E0921 21:59:29.137158    5393 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.742856  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5402]: E0921 21:59:29.876549    5402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.743281  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:30 kubernetes-upgrade-20220921215522-10174 kubelet[5412]: E0921 21:59:30.626109    5412 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.743697  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:31 kubernetes-upgrade-20220921215522-10174 kubelet[5423]: E0921 21:59:31.374600    5423 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.744123  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5435]: E0921 21:59:32.145394    5435 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.744544  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5446]: E0921 21:59:32.870591    5446 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.744979  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:33 kubernetes-upgrade-20220921215522-10174 kubelet[5457]: E0921 21:59:33.620542    5457 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.745428  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:34 kubernetes-upgrade-20220921215522-10174 kubelet[5468]: E0921 21:59:34.371200    5468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.745845  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5479]: E0921 21:59:35.124735    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.746256  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5630]: E0921 21:59:35.871827    5630 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.746674  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:36 kubernetes-upgrade-20220921215522-10174 kubelet[5641]: E0921 21:59:36.621845    5641 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.747075  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:37 kubernetes-upgrade-20220921215522-10174 kubelet[5652]: E0921 21:59:37.378142    5652 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.747486  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5663]: E0921 21:59:38.135567    5663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.747975  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5674]: E0921 21:59:38.874212    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.748366  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:39 kubernetes-upgrade-20220921215522-10174 kubelet[5685]: E0921 21:59:39.623075    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.748755  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:40 kubernetes-upgrade-20220921215522-10174 kubelet[5696]: E0921 21:59:40.374289    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.749149  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5708]: E0921 21:59:41.126179    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.749526  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5719]: E0921 21:59:41.879591    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.749907  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:42 kubernetes-upgrade-20220921215522-10174 kubelet[5729]: E0921 21:59:42.633853    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.750285  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:43 kubernetes-upgrade-20220921215522-10174 kubelet[5740]: E0921 21:59:43.387451    5740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.750682  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5752]: E0921 21:59:44.128455    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.751068  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5762]: E0921 21:59:44.874407    5762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.751450  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:45 kubernetes-upgrade-20220921215522-10174 kubelet[5774]: E0921 21:59:45.623741    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.751893  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:46 kubernetes-upgrade-20220921215522-10174 kubelet[5921]: E0921 21:59:46.369661    5921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.752294  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5932]: E0921 21:59:47.127018    5932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.752950  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5943]: E0921 21:59:47.882291    5943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.753358  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:48 kubernetes-upgrade-20220921215522-10174 kubelet[5954]: E0921 21:59:48.659376    5954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.753756  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:49 kubernetes-upgrade-20220921215522-10174 kubelet[5966]: E0921 21:59:49.395999    5966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.754158  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5977]: E0921 21:59:50.139088    5977 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.754565  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5987]: E0921 21:59:50.872398    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.754971  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:51 kubernetes-upgrade-20220921215522-10174 kubelet[5999]: E0921 21:59:51.621693    5999 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.755372  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:52 kubernetes-upgrade-20220921215522-10174 kubelet[6010]: E0921 21:59:52.378257    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.755840  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6021]: E0921 21:59:53.122030    6021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.756262  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6032]: E0921 21:59:53.883474    6032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.756643  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:54 kubernetes-upgrade-20220921215522-10174 kubelet[6043]: E0921 21:59:54.623656    6043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.757086  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:55 kubernetes-upgrade-20220921215522-10174 kubelet[6053]: E0921 21:59:55.373880    6053 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.757477  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6064]: E0921 21:59:56.122849    6064 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:56.757607  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:56.757621  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 21:59:56.757724  163433 out.go:239] X Problems detected in kubelet:
	W0921 21:59:56.757738  163433 out.go:239]   Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6021]: E0921 21:59:53.122030    6021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.757743  163433 out.go:239]   Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6032]: E0921 21:59:53.883474    6032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.757749  163433 out.go:239]   Sep 21 21:59:54 kubernetes-upgrade-20220921215522-10174 kubelet[6043]: E0921 21:59:54.623656    6043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.757754  163433 out.go:239]   Sep 21 21:59:55 kubernetes-upgrade-20220921215522-10174 kubelet[6053]: E0921 21:59:55.373880    6053 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 21:59:56.757760  163433 out.go:239]   Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6064]: E0921 21:59:56.122849    6064 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 21:59:56.757766  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:56.757774  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
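	[Editor's note] Every kubelet failure above has the same root cause: the --cni-conf-dir flag was removed from the kubelet together with dockershim (Kubernetes v1.24+), so the v1.25.2 kubelet exits immediately when launched with flags left over from the v1.16.0 profile, and systemd keeps restarting it (hence the climbing kubelet PIDs). A minimal sketch to confirm the stale flag on the node; the config paths are assumptions about where minikube/kubeadm render kubelet arguments, not taken from this log:
	
	    # Hypothetical check (paths assumed): look for the removed flag in the
	    # kubelet unit drop-in and kubeadm flags file inside the node container.
	    docker exec kubernetes-upgrade-20220921215522-10174 sh -c \
	      'grep -rn -- "--cni-conf-dir" \
	         /etc/systemd/system/kubelet.service.d/ \
	         /var/lib/kubelet/kubeadm-flags.env 2>/dev/null'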
	I0921 22:00:06.759265  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:00:06.863333  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 22:00:06.863401  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 22:00:06.889775  163433 cri.go:87] found id: ""
	I0921 22:00:06.889804  163433 logs.go:274] 0 containers: []
	W0921 22:00:06.889814  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 22:00:06.889822  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 22:00:06.889870  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 22:00:06.911963  163433 cri.go:87] found id: ""
	I0921 22:00:06.911994  163433 logs.go:274] 0 containers: []
	W0921 22:00:06.912003  163433 logs.go:276] No container was found matching "etcd"
	I0921 22:00:06.912011  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 22:00:06.912055  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 22:00:06.934612  163433 cri.go:87] found id: ""
	I0921 22:00:06.934646  163433 logs.go:274] 0 containers: []
	W0921 22:00:06.934655  163433 logs.go:276] No container was found matching "coredns"
	I0921 22:00:06.934663  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 22:00:06.934707  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 22:00:06.961709  163433 cri.go:87] found id: ""
	I0921 22:00:06.961739  163433 logs.go:274] 0 containers: []
	W0921 22:00:06.961749  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 22:00:06.961756  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 22:00:06.961810  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 22:00:07.006619  163433 cri.go:87] found id: ""
	I0921 22:00:07.006647  163433 logs.go:274] 0 containers: []
	W0921 22:00:07.006656  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 22:00:07.006667  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 22:00:07.006722  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 22:00:07.041876  163433 cri.go:87] found id: ""
	I0921 22:00:07.041904  163433 logs.go:274] 0 containers: []
	W0921 22:00:07.041942  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 22:00:07.041952  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 22:00:07.042006  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 22:00:07.115611  163433 cri.go:87] found id: ""
	I0921 22:00:07.115635  163433 logs.go:274] 0 containers: []
	W0921 22:00:07.115641  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 22:00:07.115647  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 22:00:07.115688  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 22:00:07.217253  163433 cri.go:87] found id: ""
	I0921 22:00:07.217289  163433 logs.go:274] 0 containers: []
	W0921 22:00:07.217297  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 22:00:07.217309  163433 logs.go:123] Gathering logs for container status ...
	I0921 22:00:07.217323  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 22:00:07.246160  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 22:00:07.246193  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 22:00:07.262860  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5082]: E0921 21:59:17.132884    5082 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.263258  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:17 kubernetes-upgrade-20220921215522-10174 kubelet[5093]: E0921 21:59:17.876596    5093 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.263645  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:18 kubernetes-upgrade-20220921215522-10174 kubelet[5104]: E0921 21:59:18.633938    5104 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.264094  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:19 kubernetes-upgrade-20220921215522-10174 kubelet[5115]: E0921 21:59:19.373556    5115 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.264506  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5125]: E0921 21:59:20.124415    5125 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.265029  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:20 kubernetes-upgrade-20220921215522-10174 kubelet[5136]: E0921 21:59:20.873303    5136 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.265473  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:21 kubernetes-upgrade-20220921215522-10174 kubelet[5147]: E0921 21:59:21.625541    5147 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.265972  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:22 kubernetes-upgrade-20220921215522-10174 kubelet[5158]: E0921 21:59:22.381102    5158 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.266351  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5170]: E0921 21:59:23.127688    5170 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.266797  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:23 kubernetes-upgrade-20220921215522-10174 kubelet[5181]: E0921 21:59:23.885323    5181 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.267203  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:24 kubernetes-upgrade-20220921215522-10174 kubelet[5194]: E0921 21:59:24.636411    5194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.267587  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:25 kubernetes-upgrade-20220921215522-10174 kubelet[5337]: E0921 21:59:25.370164    5337 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.268012  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5348]: E0921 21:59:26.124915    5348 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.268398  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:26 kubernetes-upgrade-20220921215522-10174 kubelet[5359]: E0921 21:59:26.873809    5359 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.268788  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:27 kubernetes-upgrade-20220921215522-10174 kubelet[5370]: E0921 21:59:27.628142    5370 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.269208  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:28 kubernetes-upgrade-20220921215522-10174 kubelet[5382]: E0921 21:59:28.390222    5382 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.269596  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5393]: E0921 21:59:29.137158    5393 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.269980  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5402]: E0921 21:59:29.876549    5402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.270373  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:30 kubernetes-upgrade-20220921215522-10174 kubelet[5412]: E0921 21:59:30.626109    5412 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.270746  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:31 kubernetes-upgrade-20220921215522-10174 kubelet[5423]: E0921 21:59:31.374600    5423 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.271126  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5435]: E0921 21:59:32.145394    5435 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.271517  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5446]: E0921 21:59:32.870591    5446 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.271916  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:33 kubernetes-upgrade-20220921215522-10174 kubelet[5457]: E0921 21:59:33.620542    5457 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.272296  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:34 kubernetes-upgrade-20220921215522-10174 kubelet[5468]: E0921 21:59:34.371200    5468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.272689  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5479]: E0921 21:59:35.124735    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.273082  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5630]: E0921 21:59:35.871827    5630 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.273499  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:36 kubernetes-upgrade-20220921215522-10174 kubelet[5641]: E0921 21:59:36.621845    5641 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.273874  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:37 kubernetes-upgrade-20220921215522-10174 kubelet[5652]: E0921 21:59:37.378142    5652 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.274252  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5663]: E0921 21:59:38.135567    5663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.274643  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5674]: E0921 21:59:38.874212    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.275020  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:39 kubernetes-upgrade-20220921215522-10174 kubelet[5685]: E0921 21:59:39.623075    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.275399  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:40 kubernetes-upgrade-20220921215522-10174 kubelet[5696]: E0921 21:59:40.374289    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.275932  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5708]: E0921 21:59:41.126179    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.276591  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5719]: E0921 21:59:41.879591    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.277278  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:42 kubernetes-upgrade-20220921215522-10174 kubelet[5729]: E0921 21:59:42.633853    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.277949  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:43 kubernetes-upgrade-20220921215522-10174 kubelet[5740]: E0921 21:59:43.387451    5740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.278598  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5752]: E0921 21:59:44.128455    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.279294  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5762]: E0921 21:59:44.874407    5762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.279997  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:45 kubernetes-upgrade-20220921215522-10174 kubelet[5774]: E0921 21:59:45.623741    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.280655  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:46 kubernetes-upgrade-20220921215522-10174 kubelet[5921]: E0921 21:59:46.369661    5921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.281361  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5932]: E0921 21:59:47.127018    5932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.282053  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5943]: E0921 21:59:47.882291    5943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.284751  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:48 kubernetes-upgrade-20220921215522-10174 kubelet[5954]: E0921 21:59:48.659376    5954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.285448  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:49 kubernetes-upgrade-20220921215522-10174 kubelet[5966]: E0921 21:59:49.395999    5966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.286160  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5977]: E0921 21:59:50.139088    5977 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.286961  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5987]: E0921 21:59:50.872398    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.287922  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:51 kubernetes-upgrade-20220921215522-10174 kubelet[5999]: E0921 21:59:51.621693    5999 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.288632  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:52 kubernetes-upgrade-20220921215522-10174 kubelet[6010]: E0921 21:59:52.378257    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.289352  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6021]: E0921 21:59:53.122030    6021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.290144  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6032]: E0921 21:59:53.883474    6032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.290821  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:54 kubernetes-upgrade-20220921215522-10174 kubelet[6043]: E0921 21:59:54.623656    6043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.294823  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:55 kubernetes-upgrade-20220921215522-10174 kubelet[6053]: E0921 21:59:55.373880    6053 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.295499  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6064]: E0921 21:59:56.122849    6064 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.296201  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6212]: E0921 21:59:56.912660    6212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.296909  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:57 kubernetes-upgrade-20220921215522-10174 kubelet[6223]: E0921 21:59:57.625738    6223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.297570  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:58 kubernetes-upgrade-20220921215522-10174 kubelet[6233]: E0921 21:59:58.376248    6233 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.298245  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:59 kubernetes-upgrade-20220921215522-10174 kubelet[6244]: E0921 21:59:59.124640    6244 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.298958  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:59 kubernetes-upgrade-20220921215522-10174 kubelet[6255]: E0921 21:59:59.885711    6255 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.299673  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:00 kubernetes-upgrade-20220921215522-10174 kubelet[6265]: E0921 22:00:00.628088    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.305340  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:01 kubernetes-upgrade-20220921215522-10174 kubelet[6275]: E0921 22:00:01.381353    6275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.306041  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:02 kubernetes-upgrade-20220921215522-10174 kubelet[6286]: E0921 22:00:02.136330    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.306704  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:02 kubernetes-upgrade-20220921215522-10174 kubelet[6296]: E0921 22:00:02.884102    6296 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.307360  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:03 kubernetes-upgrade-20220921215522-10174 kubelet[6307]: E0921 22:00:03.631068    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.308033  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:04 kubernetes-upgrade-20220921215522-10174 kubelet[6317]: E0921 22:00:04.376091    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.308686  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6328]: E0921 22:00:05.124355    6328 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.309364  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6339]: E0921 22:00:05.878766    6339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.310002  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:06 kubernetes-upgrade-20220921215522-10174 kubelet[6349]: E0921 22:00:06.623312    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:00:07.310233  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 22:00:07.310251  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 22:00:07.334736  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 22:00:07.334835  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 22:00:07.447250  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 22:00:07.447293  163433 logs.go:123] Gathering logs for containerd ...
	I0921 22:00:07.447308  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 22:00:07.503668  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 22:00:07.503704  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 22:00:07.503936  163433 out.go:239] X Problems detected in kubelet:
	W0921 22:00:07.503973  163433 out.go:239]   Sep 21 22:00:03 kubernetes-upgrade-20220921215522-10174 kubelet[6307]: E0921 22:00:03.631068    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.503981  163433 out.go:239]   Sep 21 22:00:04 kubernetes-upgrade-20220921215522-10174 kubelet[6317]: E0921 22:00:04.376091    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.503991  163433 out.go:239]   Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6328]: E0921 22:00:05.124355    6328 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.504001  163433 out.go:239]   Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6339]: E0921 22:00:05.878766    6339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:07.504009  163433 out.go:239]   Sep 21 22:00:06 kubernetes-upgrade-20220921215522-10174 kubelet[6349]: E0921 22:00:06.623312    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:00:07.504079  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 22:00:07.504095  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
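	[Editor's note] The roughly ten-second gaps between cycles (21:59:56 → 22:00:06 → 22:00:17) are minikube's apiserver wait loop: it probes for a kube-apiserver process and, when none is found, falls back to listing CRI containers and harvesting kubelet/containerd logs. An illustrative bash equivalent of the probe, using only the commands visible in this log (a sketch assuming a fixed 10s interval, not minikube's actual source):
	
	    # Illustrative sketch of the readiness poll, assuming a 10s retry interval.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sudo crictl ps -a --quiet --name=kube-apiserver  # stays empty while kubelet crash-loops
	      sleep 10
	    done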
	I0921 22:00:17.506037  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:00:17.862999  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 22:00:17.863091  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 22:00:17.890706  163433 cri.go:87] found id: ""
	I0921 22:00:17.890735  163433 logs.go:274] 0 containers: []
	W0921 22:00:17.890752  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 22:00:17.890759  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 22:00:17.890818  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 22:00:17.919086  163433 cri.go:87] found id: ""
	I0921 22:00:17.919119  163433 logs.go:274] 0 containers: []
	W0921 22:00:17.919127  163433 logs.go:276] No container was found matching "etcd"
	I0921 22:00:17.919136  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 22:00:17.919189  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 22:00:17.943931  163433 cri.go:87] found id: ""
	I0921 22:00:17.943966  163433 logs.go:274] 0 containers: []
	W0921 22:00:17.943975  163433 logs.go:276] No container was found matching "coredns"
	I0921 22:00:17.943982  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 22:00:17.944037  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 22:00:17.970927  163433 cri.go:87] found id: ""
	I0921 22:00:17.970954  163433 logs.go:274] 0 containers: []
	W0921 22:00:17.970961  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 22:00:17.970966  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 22:00:17.971044  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 22:00:17.997937  163433 cri.go:87] found id: ""
	I0921 22:00:17.997960  163433 logs.go:274] 0 containers: []
	W0921 22:00:17.997967  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 22:00:17.997972  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 22:00:17.998027  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 22:00:18.025139  163433 cri.go:87] found id: ""
	I0921 22:00:18.025167  163433 logs.go:274] 0 containers: []
	W0921 22:00:18.025175  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 22:00:18.025183  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 22:00:18.025253  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 22:00:18.057790  163433 cri.go:87] found id: ""
	I0921 22:00:18.057815  163433 logs.go:274] 0 containers: []
	W0921 22:00:18.057821  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 22:00:18.057827  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 22:00:18.057878  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 22:00:18.086006  163433 cri.go:87] found id: ""
	I0921 22:00:18.086037  163433 logs.go:274] 0 containers: []
	W0921 22:00:18.086046  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 22:00:18.086059  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 22:00:18.086073  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 22:00:18.104469  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:28 kubernetes-upgrade-20220921215522-10174 kubelet[5382]: E0921 21:59:28.390222    5382 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.104889  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5393]: E0921 21:59:29.137158    5393 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.105311  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:29 kubernetes-upgrade-20220921215522-10174 kubelet[5402]: E0921 21:59:29.876549    5402 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.105879  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:30 kubernetes-upgrade-20220921215522-10174 kubelet[5412]: E0921 21:59:30.626109    5412 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.106541  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:31 kubernetes-upgrade-20220921215522-10174 kubelet[5423]: E0921 21:59:31.374600    5423 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.107212  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5435]: E0921 21:59:32.145394    5435 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.107920  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:32 kubernetes-upgrade-20220921215522-10174 kubelet[5446]: E0921 21:59:32.870591    5446 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.108433  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:33 kubernetes-upgrade-20220921215522-10174 kubelet[5457]: E0921 21:59:33.620542    5457 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.109004  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:34 kubernetes-upgrade-20220921215522-10174 kubelet[5468]: E0921 21:59:34.371200    5468 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.109557  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5479]: E0921 21:59:35.124735    5479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.109978  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:35 kubernetes-upgrade-20220921215522-10174 kubelet[5630]: E0921 21:59:35.871827    5630 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.110373  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:36 kubernetes-upgrade-20220921215522-10174 kubelet[5641]: E0921 21:59:36.621845    5641 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.110933  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:37 kubernetes-upgrade-20220921215522-10174 kubelet[5652]: E0921 21:59:37.378142    5652 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.111637  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5663]: E0921 21:59:38.135567    5663 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.112285  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5674]: E0921 21:59:38.874212    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.112735  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:39 kubernetes-upgrade-20220921215522-10174 kubelet[5685]: E0921 21:59:39.623075    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.113111  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:40 kubernetes-upgrade-20220921215522-10174 kubelet[5696]: E0921 21:59:40.374289    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.113571  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5708]: E0921 21:59:41.126179    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.114242  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5719]: E0921 21:59:41.879591    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.114945  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:42 kubernetes-upgrade-20220921215522-10174 kubelet[5729]: E0921 21:59:42.633853    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.115599  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:43 kubernetes-upgrade-20220921215522-10174 kubelet[5740]: E0921 21:59:43.387451    5740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.116289  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5752]: E0921 21:59:44.128455    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.116930  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5762]: E0921 21:59:44.874407    5762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.117607  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:45 kubernetes-upgrade-20220921215522-10174 kubelet[5774]: E0921 21:59:45.623741    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.118211  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:46 kubernetes-upgrade-20220921215522-10174 kubelet[5921]: E0921 21:59:46.369661    5921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.118699  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5932]: E0921 21:59:47.127018    5932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.119196  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5943]: E0921 21:59:47.882291    5943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.119653  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:48 kubernetes-upgrade-20220921215522-10174 kubelet[5954]: E0921 21:59:48.659376    5954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.120146  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:49 kubernetes-upgrade-20220921215522-10174 kubelet[5966]: E0921 21:59:49.395999    5966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.120691  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5977]: E0921 21:59:50.139088    5977 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.121407  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5987]: E0921 21:59:50.872398    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.122061  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:51 kubernetes-upgrade-20220921215522-10174 kubelet[5999]: E0921 21:59:51.621693    5999 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.122739  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:52 kubernetes-upgrade-20220921215522-10174 kubelet[6010]: E0921 21:59:52.378257    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.123391  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6021]: E0921 21:59:53.122030    6021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.124080  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6032]: E0921 21:59:53.883474    6032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.124802  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:54 kubernetes-upgrade-20220921215522-10174 kubelet[6043]: E0921 21:59:54.623656    6043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.125512  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:55 kubernetes-upgrade-20220921215522-10174 kubelet[6053]: E0921 21:59:55.373880    6053 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.126217  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6064]: E0921 21:59:56.122849    6064 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.126789  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6212]: E0921 21:59:56.912660    6212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.127191  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:57 kubernetes-upgrade-20220921215522-10174 kubelet[6223]: E0921 21:59:57.625738    6223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.127786  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:58 kubernetes-upgrade-20220921215522-10174 kubelet[6233]: E0921 21:59:58.376248    6233 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.128284  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:59 kubernetes-upgrade-20220921215522-10174 kubelet[6244]: E0921 21:59:59.124640    6244 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.128751  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:59 kubernetes-upgrade-20220921215522-10174 kubelet[6255]: E0921 21:59:59.885711    6255 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.129343  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:00 kubernetes-upgrade-20220921215522-10174 kubelet[6265]: E0921 22:00:00.628088    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.129915  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:01 kubernetes-upgrade-20220921215522-10174 kubelet[6275]: E0921 22:00:01.381353    6275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.130316  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:02 kubernetes-upgrade-20220921215522-10174 kubelet[6286]: E0921 22:00:02.136330    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.130910  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:02 kubernetes-upgrade-20220921215522-10174 kubelet[6296]: E0921 22:00:02.884102    6296 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.131557  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:03 kubernetes-upgrade-20220921215522-10174 kubelet[6307]: E0921 22:00:03.631068    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.132039  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:04 kubernetes-upgrade-20220921215522-10174 kubelet[6317]: E0921 22:00:04.376091    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.132443  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6328]: E0921 22:00:05.124355    6328 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.132860  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6339]: E0921 22:00:05.878766    6339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.133252  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:06 kubernetes-upgrade-20220921215522-10174 kubelet[6349]: E0921 22:00:06.623312    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.133634  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:07 kubernetes-upgrade-20220921215522-10174 kubelet[6479]: E0921 22:00:07.413586    6479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.134039  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:08 kubernetes-upgrade-20220921215522-10174 kubelet[6498]: E0921 22:00:08.203915    6498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.134422  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:08 kubernetes-upgrade-20220921215522-10174 kubelet[6506]: E0921 22:00:08.896024    6506 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.134801  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:09 kubernetes-upgrade-20220921215522-10174 kubelet[6517]: E0921 22:00:09.631187    6517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.135191  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:10 kubernetes-upgrade-20220921215522-10174 kubelet[6528]: E0921 22:00:10.378072    6528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.135583  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:11 kubernetes-upgrade-20220921215522-10174 kubelet[6537]: E0921 22:00:11.154422    6537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.135993  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:11 kubernetes-upgrade-20220921215522-10174 kubelet[6548]: E0921 22:00:11.883832    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.136387  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:12 kubernetes-upgrade-20220921215522-10174 kubelet[6558]: E0921 22:00:12.650188    6558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.136762  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:13 kubernetes-upgrade-20220921215522-10174 kubelet[6569]: E0921 22:00:13.393024    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.137146  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:14 kubernetes-upgrade-20220921215522-10174 kubelet[6579]: E0921 22:00:14.131211    6579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.137557  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:14 kubernetes-upgrade-20220921215522-10174 kubelet[6589]: E0921 22:00:14.887145    6589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.137949  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:15 kubernetes-upgrade-20220921215522-10174 kubelet[6601]: E0921 22:00:15.671116    6601 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.138338  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:16 kubernetes-upgrade-20220921215522-10174 kubelet[6612]: E0921 22:00:16.392213    6612 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.138719  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:17 kubernetes-upgrade-20220921215522-10174 kubelet[6621]: E0921 22:00:17.141076    6621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.139103  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:17 kubernetes-upgrade-20220921215522-10174 kubelet[6634]: E0921 22:00:17.877141    6634 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:00:18.139248  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 22:00:18.139308  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 22:00:18.154371  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 22:00:18.154399  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 22:00:18.223787  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 22:00:18.223812  163433 logs.go:123] Gathering logs for containerd ...
	I0921 22:00:18.223822  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 22:00:18.259233  163433 logs.go:123] Gathering logs for container status ...
	I0921 22:00:18.259265  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 22:00:18.296290  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 22:00:18.296316  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 22:00:18.296434  163433 out.go:239] X Problems detected in kubelet:
	W0921 22:00:18.296447  163433 out.go:239]   Sep 21 22:00:14 kubernetes-upgrade-20220921215522-10174 kubelet[6589]: E0921 22:00:14.887145    6589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.296455  163433 out.go:239]   Sep 21 22:00:15 kubernetes-upgrade-20220921215522-10174 kubelet[6601]: E0921 22:00:15.671116    6601 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.296464  163433 out.go:239]   Sep 21 22:00:16 kubernetes-upgrade-20220921215522-10174 kubelet[6612]: E0921 22:00:16.392213    6612 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.296474  163433 out.go:239]   Sep 21 22:00:17 kubernetes-upgrade-20220921215522-10174 kubelet[6621]: E0921 22:00:17.141076    6621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:18.296487  163433 out.go:239]   Sep 21 22:00:17 kubernetes-upgrade-20220921215522-10174 kubelet[6634]: E0921 22:00:17.877141    6634 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:00:18.296493  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 22:00:18.296505  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
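	[Editor's note] The describe-nodes probe keeps failing with "connection refused" on localhost:8443 because the apiserver static pod can only be created by a running kubelet, and the kubelet never survives startup. Assuming the stale flag really is the blocker, a hypothetical remediation sketch (the drop-in path is an assumption about minikube's kubelet unit layout, not confirmed by this log):
	
	    # Hypothetical fix (path assumed): strip the removed flag, then restart kubelet.
	    docker exec kubernetes-upgrade-20220921215522-10174 sh -c \
	      'sed -i "s/ --cni-conf-dir=[^ ]*//" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
	       && systemctl daemon-reload && systemctl restart kubelet'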
	I0921 22:00:28.297531  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:00:28.363064  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 22:00:28.363176  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 22:00:28.393968  163433 cri.go:87] found id: ""
	I0921 22:00:28.393997  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.394006  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 22:00:28.394015  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 22:00:28.394077  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 22:00:28.425199  163433 cri.go:87] found id: ""
	I0921 22:00:28.425228  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.425236  163433 logs.go:276] No container was found matching "etcd"
	I0921 22:00:28.425244  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 22:00:28.425305  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 22:00:28.458022  163433 cri.go:87] found id: ""
	I0921 22:00:28.458058  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.458069  163433 logs.go:276] No container was found matching "coredns"
	I0921 22:00:28.458077  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 22:00:28.458141  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 22:00:28.484520  163433 cri.go:87] found id: ""
	I0921 22:00:28.484560  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.484569  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 22:00:28.484577  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 22:00:28.484642  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 22:00:28.510468  163433 cri.go:87] found id: ""
	I0921 22:00:28.510498  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.510508  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 22:00:28.510516  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 22:00:28.510564  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 22:00:28.535911  163433 cri.go:87] found id: ""
	I0921 22:00:28.535934  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.535940  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 22:00:28.535946  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 22:00:28.535997  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 22:00:28.573959  163433 cri.go:87] found id: ""
	I0921 22:00:28.573988  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.573995  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 22:00:28.574001  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 22:00:28.574044  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 22:00:28.607709  163433 cri.go:87] found id: ""
	I0921 22:00:28.607756  163433 logs.go:274] 0 containers: []
	W0921 22:00:28.607763  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 22:00:28.607773  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 22:00:28.607784  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 22:00:28.624882  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:38 kubernetes-upgrade-20220921215522-10174 kubelet[5674]: E0921 21:59:38.874212    5674 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.625602  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:39 kubernetes-upgrade-20220921215522-10174 kubelet[5685]: E0921 21:59:39.623075    5685 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.626333  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:40 kubernetes-upgrade-20220921215522-10174 kubelet[5696]: E0921 21:59:40.374289    5696 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.627034  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5708]: E0921 21:59:41.126179    5708 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.627769  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:41 kubernetes-upgrade-20220921215522-10174 kubelet[5719]: E0921 21:59:41.879591    5719 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.628496  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:42 kubernetes-upgrade-20220921215522-10174 kubelet[5729]: E0921 21:59:42.633853    5729 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.629220  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:43 kubernetes-upgrade-20220921215522-10174 kubelet[5740]: E0921 21:59:43.387451    5740 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.629933  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5752]: E0921 21:59:44.128455    5752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.630646  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:44 kubernetes-upgrade-20220921215522-10174 kubelet[5762]: E0921 21:59:44.874407    5762 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.631358  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:45 kubernetes-upgrade-20220921215522-10174 kubelet[5774]: E0921 21:59:45.623741    5774 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.632109  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:46 kubernetes-upgrade-20220921215522-10174 kubelet[5921]: E0921 21:59:46.369661    5921 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.632826  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5932]: E0921 21:59:47.127018    5932 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.633525  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:47 kubernetes-upgrade-20220921215522-10174 kubelet[5943]: E0921 21:59:47.882291    5943 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.634234  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:48 kubernetes-upgrade-20220921215522-10174 kubelet[5954]: E0921 21:59:48.659376    5954 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.634960  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:49 kubernetes-upgrade-20220921215522-10174 kubelet[5966]: E0921 21:59:49.395999    5966 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.635661  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5977]: E0921 21:59:50.139088    5977 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.636379  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:50 kubernetes-upgrade-20220921215522-10174 kubelet[5987]: E0921 21:59:50.872398    5987 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.637093  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:51 kubernetes-upgrade-20220921215522-10174 kubelet[5999]: E0921 21:59:51.621693    5999 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.637798  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:52 kubernetes-upgrade-20220921215522-10174 kubelet[6010]: E0921 21:59:52.378257    6010 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.638504  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6021]: E0921 21:59:53.122030    6021 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.639234  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:53 kubernetes-upgrade-20220921215522-10174 kubelet[6032]: E0921 21:59:53.883474    6032 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.639957  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:54 kubernetes-upgrade-20220921215522-10174 kubelet[6043]: E0921 21:59:54.623656    6043 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.640663  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:55 kubernetes-upgrade-20220921215522-10174 kubelet[6053]: E0921 21:59:55.373880    6053 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.641375  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6064]: E0921 21:59:56.122849    6064 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.642089  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:56 kubernetes-upgrade-20220921215522-10174 kubelet[6212]: E0921 21:59:56.912660    6212 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.642802  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:57 kubernetes-upgrade-20220921215522-10174 kubelet[6223]: E0921 21:59:57.625738    6223 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.643510  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:58 kubernetes-upgrade-20220921215522-10174 kubelet[6233]: E0921 21:59:58.376248    6233 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.644235  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:59 kubernetes-upgrade-20220921215522-10174 kubelet[6244]: E0921 21:59:59.124640    6244 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.644946  163433 logs.go:138] Found kubelet problem: Sep 21 21:59:59 kubernetes-upgrade-20220921215522-10174 kubelet[6255]: E0921 21:59:59.885711    6255 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.645654  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:00 kubernetes-upgrade-20220921215522-10174 kubelet[6265]: E0921 22:00:00.628088    6265 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.646365  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:01 kubernetes-upgrade-20220921215522-10174 kubelet[6275]: E0921 22:00:01.381353    6275 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.647081  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:02 kubernetes-upgrade-20220921215522-10174 kubelet[6286]: E0921 22:00:02.136330    6286 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.647797  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:02 kubernetes-upgrade-20220921215522-10174 kubelet[6296]: E0921 22:00:02.884102    6296 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.648511  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:03 kubernetes-upgrade-20220921215522-10174 kubelet[6307]: E0921 22:00:03.631068    6307 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.649237  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:04 kubernetes-upgrade-20220921215522-10174 kubelet[6317]: E0921 22:00:04.376091    6317 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.649959  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6328]: E0921 22:00:05.124355    6328 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.650666  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:05 kubernetes-upgrade-20220921215522-10174 kubelet[6339]: E0921 22:00:05.878766    6339 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.651377  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:06 kubernetes-upgrade-20220921215522-10174 kubelet[6349]: E0921 22:00:06.623312    6349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.652101  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:07 kubernetes-upgrade-20220921215522-10174 kubelet[6479]: E0921 22:00:07.413586    6479 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.652815  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:08 kubernetes-upgrade-20220921215522-10174 kubelet[6498]: E0921 22:00:08.203915    6498 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.653518  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:08 kubernetes-upgrade-20220921215522-10174 kubelet[6506]: E0921 22:00:08.896024    6506 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.654229  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:09 kubernetes-upgrade-20220921215522-10174 kubelet[6517]: E0921 22:00:09.631187    6517 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.654943  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:10 kubernetes-upgrade-20220921215522-10174 kubelet[6528]: E0921 22:00:10.378072    6528 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.655651  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:11 kubernetes-upgrade-20220921215522-10174 kubelet[6537]: E0921 22:00:11.154422    6537 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.656373  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:11 kubernetes-upgrade-20220921215522-10174 kubelet[6548]: E0921 22:00:11.883832    6548 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.657089  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:12 kubernetes-upgrade-20220921215522-10174 kubelet[6558]: E0921 22:00:12.650188    6558 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.657804  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:13 kubernetes-upgrade-20220921215522-10174 kubelet[6569]: E0921 22:00:13.393024    6569 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.658512  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:14 kubernetes-upgrade-20220921215522-10174 kubelet[6579]: E0921 22:00:14.131211    6579 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.659226  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:14 kubernetes-upgrade-20220921215522-10174 kubelet[6589]: E0921 22:00:14.887145    6589 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.659962  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:15 kubernetes-upgrade-20220921215522-10174 kubelet[6601]: E0921 22:00:15.671116    6601 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.660686  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:16 kubernetes-upgrade-20220921215522-10174 kubelet[6612]: E0921 22:00:16.392213    6612 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.661399  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:17 kubernetes-upgrade-20220921215522-10174 kubelet[6621]: E0921 22:00:17.141076    6621 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.662117  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:17 kubernetes-upgrade-20220921215522-10174 kubelet[6634]: E0921 22:00:17.877141    6634 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.662824  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:18 kubernetes-upgrade-20220921215522-10174 kubelet[6780]: E0921 22:00:18.628375    6780 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.663557  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:19 kubernetes-upgrade-20220921215522-10174 kubelet[6791]: E0921 22:00:19.403500    6791 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.664274  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:20 kubernetes-upgrade-20220921215522-10174 kubelet[6802]: E0921 22:00:20.126094    6802 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.664984  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:20 kubernetes-upgrade-20220921215522-10174 kubelet[6813]: E0921 22:00:20.880801    6813 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.665689  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:21 kubernetes-upgrade-20220921215522-10174 kubelet[6825]: E0921 22:00:21.625219    6825 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.666401  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:22 kubernetes-upgrade-20220921215522-10174 kubelet[6836]: E0921 22:00:22.375925    6836 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.667115  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:23 kubernetes-upgrade-20220921215522-10174 kubelet[6847]: E0921 22:00:23.123479    6847 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.667838  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:23 kubernetes-upgrade-20220921215522-10174 kubelet[6858]: E0921 22:00:23.879137    6858 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.668549  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:24 kubernetes-upgrade-20220921215522-10174 kubelet[6869]: E0921 22:00:24.632766    6869 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.669270  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:25 kubernetes-upgrade-20220921215522-10174 kubelet[6880]: E0921 22:00:25.374278    6880 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.669991  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:26 kubernetes-upgrade-20220921215522-10174 kubelet[6891]: E0921 22:00:26.138502    6891 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.670701  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:26 kubernetes-upgrade-20220921215522-10174 kubelet[6901]: E0921 22:00:26.871756    6901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.671420  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:27 kubernetes-upgrade-20220921215522-10174 kubelet[6913]: E0921 22:00:27.648560    6913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.672138  163433 logs.go:138] Found kubelet problem: Sep 21 22:00:28 kubernetes-upgrade-20220921215522-10174 kubelet[6925]: E0921 22:00:28.391197    6925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
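Every kubelet restart in the block above dies on the same parse error: --cni-conf-dir (like the other dockershim-era networking flags) was removed from the kubelet in Kubernetes 1.24, so a flag set written for the old v1.16 cluster cannot start a v1.25 kubelet. A minimal way to confirm where the stale flag comes from on the node, reusing the profile name from this run, would be:

	$ minikube ssh -p kubernetes-upgrade-20220921215522-10174
	$ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /var/lib/kubelet/kubeadm-flags.env   # look for --cni-conf-dir
	$ sudo journalctl -u kubelet -n 20 --no-pager   # ends in 'unknown flag: --cni-conf-dir'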
	I0921 22:00:28.672374  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 22:00:28.672396  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
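For reference, the dmesg invocation above limits the capture to actionable kernel messages; the flags are standard util-linux options:

	$ sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# -P: no pager; -H: human-readable timestamps; -L=never: no color codes;
	# --level: keep only records at warning severity or worse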
	I0921 22:00:28.688119  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 22:00:28.688154  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 22:00:28.755946  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
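The connection-refused error is consistent with the empty container listings above: no kube-apiserver container exists, so nothing is serving localhost:8443 inside the node. A quick check, assuming the usual tooling in the kicbase image, would be:

	$ minikube ssh -p kubernetes-upgrade-20220921215522-10174
	$ sudo ss -ltnp | grep 8443 || echo 'no listener on 8443'
	$ curl -ksS https://localhost:8443/healthz   # expect: connection refused while the apiserver is down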
	I0921 22:00:28.755970  163433 logs.go:123] Gathering logs for containerd ...
	I0921 22:00:28.755980  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 22:00:28.795608  163433 logs.go:123] Gathering logs for container status ...
	I0921 22:00:28.795641  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0921 22:00:28.822712  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 22:00:28.822737  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0921 22:00:28.822861  163433 out.go:239] X Problems detected in kubelet:
	W0921 22:00:28.822878  163433 out.go:239]   Sep 21 22:00:25 kubernetes-upgrade-20220921215522-10174 kubelet[6880]: E0921 22:00:25.374278    6880 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.822885  163433 out.go:239]   Sep 21 22:00:26 kubernetes-upgrade-20220921215522-10174 kubelet[6891]: E0921 22:00:26.138502    6891 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.822892  163433 out.go:239]   Sep 21 22:00:26 kubernetes-upgrade-20220921215522-10174 kubelet[6901]: E0921 22:00:26.871756    6901 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.822901  163433 out.go:239]   Sep 21 22:00:27 kubernetes-upgrade-20220921215522-10174 kubelet[6913]: E0921 22:00:27.648560    6913 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:00:28.822907  163433 out.go:239]   Sep 21 22:00:28 kubernetes-upgrade-20220921215522-10174 kubelet[6925]: E0921 22:00:28.391197    6925 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:00:28.822913  163433 out.go:309] Setting ErrFile to fd 2...
	I0921 22:00:28.822919  163433 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:00:38.823268  163433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:00:38.863544  163433 kubeadm.go:631] restartCluster took 4m1.12814734s
	W0921 22:00:38.863699  163433 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0921 22:00:38.863769  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:00:40.855466  163433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.991670203s)
	I0921 22:00:40.855539  163433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:00:40.866648  163433 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:00:40.873760  163433 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:00:40.873823  163433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:00:40.881453  163433 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:00:40.881506  163433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:00:40.924774  163433 kubeadm.go:317] W0921 22:00:40.923939    8214 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:00:40.961370  163433 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:00:41.029465  163433 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:02:37.018351  163433 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0921 22:02:37.018524  163433 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0921 22:02:37.021344  163433 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:02:37.021412  163433 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:02:37.021521  163433 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:02:37.021617  163433 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:02:37.021677  163433 kubeadm.go:317] OS: Linux
	I0921 22:02:37.021750  163433 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:02:37.021830  163433 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:02:37.021902  163433 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:02:37.021973  163433 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:02:37.022047  163433 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:02:37.022121  163433 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:02:37.022188  163433 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:02:37.022253  163433 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:02:37.022318  163433 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:02:37.022421  163433 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:02:37.022566  163433 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:02:37.022728  163433 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:02:37.022848  163433 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:02:37.025283  163433 out.go:204]   - Generating certificates and keys ...
	I0921 22:02:37.025385  163433 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:02:37.025475  163433 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:02:37.025582  163433 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:02:37.025662  163433 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:02:37.025742  163433 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:02:37.025826  163433 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:02:37.025908  163433 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:02:37.025970  163433 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:02:37.026043  163433 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:02:37.026115  163433 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:02:37.026150  163433 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:02:37.026236  163433 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:02:37.026314  163433 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:02:37.026388  163433 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:02:37.026472  163433 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:02:37.026545  163433 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:02:37.026684  163433 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:02:37.026792  163433 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:02:37.026839  163433 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:02:37.026895  163433 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:02:37.028542  163433 out.go:204]   - Booting up control plane ...
	I0921 22:02:37.028642  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:02:37.028731  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:02:37.028833  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:02:37.028952  163433 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:02:37.029138  163433 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:02:37.029208  163433 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0921 22:02:37.029298  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.029557  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.029641  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.029848  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.029934  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.030147  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.030218  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.030380  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.030466  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.030646  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.030656  163433 kubeadm.go:317] 
	I0921 22:02:37.030710  163433 kubeadm.go:317] Unfortunately, an error has occurred:
	I0921 22:02:37.030783  163433 kubeadm.go:317] 	timed out waiting for the condition
	I0921 22:02:37.030793  163433 kubeadm.go:317] 
	I0921 22:02:37.030849  163433 kubeadm.go:317] This error is likely caused by:
	I0921 22:02:37.030900  163433 kubeadm.go:317] 	- The kubelet is not running
	I0921 22:02:37.031052  163433 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0921 22:02:37.031070  163433 kubeadm.go:317] 
	I0921 22:02:37.031191  163433 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0921 22:02:37.031243  163433 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0921 22:02:37.031297  163433 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0921 22:02:37.031308  163433 kubeadm.go:317] 
	I0921 22:02:37.031422  163433 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0921 22:02:37.031534  163433 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0921 22:02:37.031667  163433 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0921 22:02:37.031810  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0921 22:02:37.031929  163433 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0921 22:02:37.032051  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
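kubeadm's troubleshooting advice applies here with one twist for the docker driver: the commands have to run inside the node container rather than on the host. With the container name from this profile, following the suggestions might look like:

	$ docker exec kubernetes-upgrade-20220921215522-10174 crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
	$ docker exec kubernetes-upgrade-20220921215522-10174 journalctl -u kubelet --no-pager -n 50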
	W0921 22:02:37.032372  163433 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:00:40.923939    8214 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0921 22:02:37.032427  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:02:38.854856  163433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.822402127s)
	I0921 22:02:38.854917  163433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:02:38.864392  163433 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:02:38.864449  163433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:02:38.871143  163433 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:02:38.871179  163433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:02:38.909279  163433 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:02:38.909365  163433 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:02:38.936932  163433 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:02:38.937031  163433 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:02:38.937063  163433 kubeadm.go:317] OS: Linux
	I0921 22:02:38.937103  163433 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:02:38.937150  163433 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:02:38.937239  163433 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:02:38.937318  163433 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:02:38.937390  163433 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:02:38.937481  163433 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:02:38.937544  163433 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:02:38.937595  163433 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:02:38.937662  163433 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:02:39.000511  163433 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:02:39.000640  163433 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:02:39.000796  163433 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:02:39.116829  163433 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:02:39.119686  163433 out.go:204]   - Generating certificates and keys ...
	I0921 22:02:39.119846  163433 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:02:39.119930  163433 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:02:39.120020  163433 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:02:39.120098  163433 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:02:39.120195  163433 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:02:39.120267  163433 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:02:39.120352  163433 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:02:39.120437  163433 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:02:39.120536  163433 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:02:39.120627  163433 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:02:39.120686  163433 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:02:39.120764  163433 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:02:39.214860  163433 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:02:39.365850  163433 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:02:39.700601  163433 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:02:40.053396  163433 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:02:40.065917  163433 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:02:40.066776  163433 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:02:40.066842  163433 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:02:40.151802  163433 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:02:40.154174  163433 out.go:204]   - Booting up control plane ...
	I0921 22:02:40.154312  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:02:40.154475  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:02:40.155428  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:02:40.156385  163433 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:02:40.159229  163433 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:03:20.159558  163433 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0921 22:03:20.159897  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:20.160149  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:03:25.160511  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:25.160744  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:03:35.160837  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:35.161107  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:03:55.162112  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:55.162307  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:04:35.163184  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:04:35.163483  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:04:35.163518  163433 kubeadm.go:317] 
	I0921 22:04:35.163584  163433 kubeadm.go:317] Unfortunately, an error has occurred:
	I0921 22:04:35.163648  163433 kubeadm.go:317] 	timed out waiting for the condition
	I0921 22:04:35.163661  163433 kubeadm.go:317] 
	I0921 22:04:35.163710  163433 kubeadm.go:317] This error is likely caused by:
	I0921 22:04:35.163796  163433 kubeadm.go:317] 	- The kubelet is not running
	I0921 22:04:35.163928  163433 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0921 22:04:35.163937  163433 kubeadm.go:317] 
	I0921 22:04:35.164060  163433 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0921 22:04:35.164118  163433 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0921 22:04:35.164157  163433 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0921 22:04:35.164173  163433 kubeadm.go:317] 
	I0921 22:04:35.164320  163433 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0921 22:04:35.164436  163433 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0921 22:04:35.164551  163433 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0921 22:04:35.164687  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0921 22:04:35.164801  163433 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0921 22:04:35.164918  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0921 22:04:35.166104  163433 kubeadm.go:317] W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:04:35.166318  163433 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:04:35.166415  163433 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:04:35.166492  163433 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0921 22:04:35.166592  163433 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
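The wait-control-plane phase above is kubeadm polling the kubelet's healthz endpoint on port 10248 with an increasing backoff (the probe timestamps are 5s, 10s, 20s, then 40s apart) before giving up. The same endpoint can be polled by hand while debugging, assuming watch is available in the node image:

	$ minikube ssh -p kubernetes-upgrade-20220921215522-10174
	$ watch -n 5 'curl -sS http://localhost:10248/healthz'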
	I0921 22:04:35.166696  163433 kubeadm.go:398] StartCluster complete in 7m57.461301325s
	I0921 22:04:35.166738  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 22:04:35.166788  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 22:04:35.191228  163433 cri.go:87] found id: ""
	I0921 22:04:35.191254  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.191261  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 22:04:35.191272  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 22:04:35.191332  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 22:04:35.214715  163433 cri.go:87] found id: ""
	I0921 22:04:35.214744  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.214750  163433 logs.go:276] No container was found matching "etcd"
	I0921 22:04:35.214756  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 22:04:35.214804  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 22:04:35.239162  163433 cri.go:87] found id: ""
	I0921 22:04:35.239186  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.239192  163433 logs.go:276] No container was found matching "coredns"
	I0921 22:04:35.239197  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 22:04:35.239252  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 22:04:35.263333  163433 cri.go:87] found id: ""
	I0921 22:04:35.263357  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.263363  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 22:04:35.263368  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 22:04:35.263407  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 22:04:35.286321  163433 cri.go:87] found id: ""
	I0921 22:04:35.286347  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.286355  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 22:04:35.286364  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 22:04:35.286426  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 22:04:35.314686  163433 cri.go:87] found id: ""
	I0921 22:04:35.314714  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.314722  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 22:04:35.314730  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 22:04:35.314783  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 22:04:35.338110  163433 cri.go:87] found id: ""
	I0921 22:04:35.338142  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.338151  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 22:04:35.338160  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 22:04:35.338240  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 22:04:35.363135  163433 cri.go:87] found id: ""
	I0921 22:04:35.363167  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.363176  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 22:04:35.363188  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 22:04:35.363200  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 22:04:35.379811  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:45 kubernetes-upgrade-20220921215522-10174 kubelet[12183]: E0921 22:03:45.374524   12183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.380296  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12194]: E0921 22:03:46.123389   12194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.380707  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12205]: E0921 22:03:46.873433   12205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.381121  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:47 kubernetes-upgrade-20220921215522-10174 kubelet[12216]: E0921 22:03:47.622295   12216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.381546  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:48 kubernetes-upgrade-20220921215522-10174 kubelet[12227]: E0921 22:03:48.372753   12227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.381922  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:49 kubernetes-upgrade-20220921215522-10174 kubelet[12238]: E0921 22:03:49.121733   12238 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.382304  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:49 kubernetes-upgrade-20220921215522-10174 kubelet[12249]: E0921 22:03:49.874411   12249 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.382688  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:50 kubernetes-upgrade-20220921215522-10174 kubelet[12260]: E0921 22:03:50.621954   12260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.383079  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:51 kubernetes-upgrade-20220921215522-10174 kubelet[12271]: E0921 22:03:51.373297   12271 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.383463  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:52 kubernetes-upgrade-20220921215522-10174 kubelet[12282]: E0921 22:03:52.124808   12282 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.383909  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:52 kubernetes-upgrade-20220921215522-10174 kubelet[12293]: E0921 22:03:52.872464   12293 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.384310  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:53 kubernetes-upgrade-20220921215522-10174 kubelet[12304]: E0921 22:03:53.621146   12304 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.384694  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:54 kubernetes-upgrade-20220921215522-10174 kubelet[12315]: E0921 22:03:54.372737   12315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.385094  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:55 kubernetes-upgrade-20220921215522-10174 kubelet[12326]: E0921 22:03:55.122043   12326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.385483  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:55 kubernetes-upgrade-20220921215522-10174 kubelet[12337]: E0921 22:03:55.873129   12337 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.385896  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:56 kubernetes-upgrade-20220921215522-10174 kubelet[12349]: E0921 22:03:56.622479   12349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.386331  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:57 kubernetes-upgrade-20220921215522-10174 kubelet[12360]: E0921 22:03:57.373797   12360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.386778  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:58 kubernetes-upgrade-20220921215522-10174 kubelet[12371]: E0921 22:03:58.123173   12371 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.387176  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:58 kubernetes-upgrade-20220921215522-10174 kubelet[12383]: E0921 22:03:58.872655   12383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.387549  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:59 kubernetes-upgrade-20220921215522-10174 kubelet[12394]: E0921 22:03:59.624915   12394 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.387984  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:00 kubernetes-upgrade-20220921215522-10174 kubelet[12405]: E0921 22:04:00.373541   12405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.388383  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:01 kubernetes-upgrade-20220921215522-10174 kubelet[12416]: E0921 22:04:01.125035   12416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.388766  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:01 kubernetes-upgrade-20220921215522-10174 kubelet[12428]: E0921 22:04:01.874353   12428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.389209  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:02 kubernetes-upgrade-20220921215522-10174 kubelet[12439]: E0921 22:04:02.624609   12439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.389617  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:03 kubernetes-upgrade-20220921215522-10174 kubelet[12450]: E0921 22:04:03.374384   12450 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.390085  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:04 kubernetes-upgrade-20220921215522-10174 kubelet[12461]: E0921 22:04:04.121121   12461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.390489  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:04 kubernetes-upgrade-20220921215522-10174 kubelet[12471]: E0921 22:04:04.874129   12471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.390875  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:05 kubernetes-upgrade-20220921215522-10174 kubelet[12482]: E0921 22:04:05.622119   12482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.391301  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:06 kubernetes-upgrade-20220921215522-10174 kubelet[12494]: E0921 22:04:06.373783   12494 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.391702  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:07 kubernetes-upgrade-20220921215522-10174 kubelet[12504]: E0921 22:04:07.122518   12504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.392128  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:07 kubernetes-upgrade-20220921215522-10174 kubelet[12516]: E0921 22:04:07.873670   12516 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.392780  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:08 kubernetes-upgrade-20220921215522-10174 kubelet[12527]: E0921 22:04:08.622374   12527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.393441  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:09 kubernetes-upgrade-20220921215522-10174 kubelet[12539]: E0921 22:04:09.380221   12539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.393896  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:10 kubernetes-upgrade-20220921215522-10174 kubelet[12549]: E0921 22:04:10.123024   12549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.394289  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:10 kubernetes-upgrade-20220921215522-10174 kubelet[12560]: E0921 22:04:10.871810   12560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.394689  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:11 kubernetes-upgrade-20220921215522-10174 kubelet[12570]: E0921 22:04:11.623184   12570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.395075  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:12 kubernetes-upgrade-20220921215522-10174 kubelet[12581]: E0921 22:04:12.373639   12581 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.395484  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:13 kubernetes-upgrade-20220921215522-10174 kubelet[12592]: E0921 22:04:13.122993   12592 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.395916  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:13 kubernetes-upgrade-20220921215522-10174 kubelet[12602]: E0921 22:04:13.874559   12602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.396306  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:14 kubernetes-upgrade-20220921215522-10174 kubelet[12612]: E0921 22:04:14.623058   12612 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.396713  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:15 kubernetes-upgrade-20220921215522-10174 kubelet[12623]: E0921 22:04:15.372885   12623 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.397101  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:16 kubernetes-upgrade-20220921215522-10174 kubelet[12633]: E0921 22:04:16.123144   12633 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.397495  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:16 kubernetes-upgrade-20220921215522-10174 kubelet[12644]: E0921 22:04:16.872120   12644 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.397886  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:17 kubernetes-upgrade-20220921215522-10174 kubelet[12655]: E0921 22:04:17.621957   12655 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.398268  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:18 kubernetes-upgrade-20220921215522-10174 kubelet[12666]: E0921 22:04:18.372467   12666 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.398694  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:19 kubernetes-upgrade-20220921215522-10174 kubelet[12676]: E0921 22:04:19.122753   12676 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.399092  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:19 kubernetes-upgrade-20220921215522-10174 kubelet[12687]: E0921 22:04:19.876067   12687 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.399483  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:20 kubernetes-upgrade-20220921215522-10174 kubelet[12698]: E0921 22:04:20.623954   12698 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.399925  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:21 kubernetes-upgrade-20220921215522-10174 kubelet[12709]: E0921 22:04:21.373994   12709 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.400440  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:22 kubernetes-upgrade-20220921215522-10174 kubelet[12720]: E0921 22:04:22.124039   12720 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.400894  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:22 kubernetes-upgrade-20220921215522-10174 kubelet[12730]: E0921 22:04:22.872700   12730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.401275  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:23 kubernetes-upgrade-20220921215522-10174 kubelet[12741]: E0921 22:04:23.620191   12741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.401664  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:24 kubernetes-upgrade-20220921215522-10174 kubelet[12752]: E0921 22:04:24.373902   12752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.402051  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:25 kubernetes-upgrade-20220921215522-10174 kubelet[12763]: E0921 22:04:25.123958   12763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.402430  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:25 kubernetes-upgrade-20220921215522-10174 kubelet[12773]: E0921 22:04:25.874249   12773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.402818  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:26 kubernetes-upgrade-20220921215522-10174 kubelet[12786]: E0921 22:04:26.623403   12786 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.403198  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:27 kubernetes-upgrade-20220921215522-10174 kubelet[12797]: E0921 22:04:27.375008   12797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.403573  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:28 kubernetes-upgrade-20220921215522-10174 kubelet[12808]: E0921 22:04:28.122905   12808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.403988  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:28 kubernetes-upgrade-20220921215522-10174 kubelet[12819]: E0921 22:04:28.871801   12819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.404363  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:29 kubernetes-upgrade-20220921215522-10174 kubelet[12830]: E0921 22:04:29.622004   12830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.404759  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:30 kubernetes-upgrade-20220921215522-10174 kubelet[12842]: E0921 22:04:30.373730   12842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.405157  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:31 kubernetes-upgrade-20220921215522-10174 kubelet[12854]: E0921 22:04:31.121895   12854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.405540  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:31 kubernetes-upgrade-20220921215522-10174 kubelet[12865]: E0921 22:04:31.872824   12865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.405921  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:32 kubernetes-upgrade-20220921215522-10174 kubelet[12876]: E0921 22:04:32.622571   12876 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.406306  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:33 kubernetes-upgrade-20220921215522-10174 kubelet[12887]: E0921 22:04:33.373474   12887 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.406697  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 kubelet[12898]: E0921 22:04:34.121999   12898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.407098  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 kubelet[12909]: E0921 22:04:34.878168   12909 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
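	The journal excerpt above is the real failure: every kubelet restart exits immediately with `unknown flag: --cni-conf-dir`. That flag was removed from the kubelet along with the other dockershim-era networking flags in Kubernetes 1.24, so the v1.25.2 kubelet rejects flags carried over from the original v1.16.0 install and never lives long enough to answer the healthz probe. One way to confirm where the stale flag comes from (a sketch; `/var/lib/kubelet/kubeadm-flags.env` is the file the `[kubelet-start]` step writes, while the systemd drop-in path is an assumption, not taken from this log):
	
		out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- sudo grep -r -e '--cni-conf-dir' /var/lib/kubelet/kubeadm-flags.env /etc/systemd/system/kubelet.service.d/
	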
	I0921 22:04:35.407226  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 22:04:35.407241  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 22:04:35.428361  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 22:04:35.428401  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 22:04:35.484445  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 22:04:35.484475  163433 logs.go:123] Gathering logs for containerd ...
	I0921 22:04:35.484488  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 22:04:35.545769  163433 logs.go:123] Gathering logs for container status ...
	I0921 22:04:35.545802  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0921 22:04:35.571634  163433 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
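	Port 10248 in the repeated kubelet-check lines is the kubelet's local healthz endpoint, and the probe kubeadm loops on is exactly the HTTP call it quotes. It can be reproduced by hand on the node while the kubelet is crash-looping (expected result here: connection refused, since the process dies at flag parsing):
	
		curl -sSL http://localhost:10248/healthz
	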
	W0921 22:04:35.571680  163433 out.go:239] * 
	W0921 22:04:35.571931  163433 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0921 22:04:35.571967  163433 out.go:239] * 
	W0921 22:04:35.572762  163433 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:04:35.575608  163433 out.go:177] X Problems detected in kubelet:
	I0921 22:04:35.576957  163433 out.go:177]   Sep 21 22:03:45 kubernetes-upgrade-20220921215522-10174 kubelet[12183]: E0921 22:03:45.374524   12183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:04:35.578291  163433 out.go:177]   Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12194]: E0921 22:03:46.123389   12194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:04:35.579626  163433 out.go:177]   Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12205]: E0921 22:03:46.873433   12205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:04:35.583058  163433 out.go:177] 
	W0921 22:04:35.584539  163433 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0921 22:04:35.584658  163433 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0921 22:04:35.584719  163433 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0921 22:04:35.586989  163433 out.go:177] 

** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220921215522-10174 --memory=2200 --kubernetes-version=v1.25.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
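The wait-control-plane failure above leaves two threads to pull on: the container-level checks that kubeadm prints, and the cgroup-driver override that minikube suggests. A minimal triage pass along those lines (a sketch reusing the profile name and containerd socket from this run; the log does not confirm that the --extra-config flag resolves this particular failure):

	out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
	out/minikube-linux-amd64 start -p kubernetes-upgrade-20220921215522-10174 --memory=2200 --kubernetes-version=v1.25.2 --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd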
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220921215522-10174 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220921215522-10174 version --output=json: exit status 1 (49.824513ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "25",
	    "gitVersion": "v1.25.2",
	    "gitCommit": "5835544ca568b757a8ecae5c153f317e5736700e",
	    "gitTreeState": "clean",
	    "buildDate": "2022-09-21T14:33:49Z",
	    "goVersion": "go1.19.1",
	    "compiler": "gc",
	    "platform": "linux/amd64"
	  },
	  "kustomizeVersion": "v4.5.7"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.67.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
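Note that the clientVersion JSON above comes from the local kubectl binary, so only the server half of 'kubectl version' failed; the refused connection points at the apiserver endpoint, not at kubectl itself. A quick reachability probe against the host/port from the error (run on the test host; the grep target is the expected apiserver container name):

	curl -k https://192.168.67.2:8443/healthz
	out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 ssh -- sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube-apiserver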
panic.go:522: *** TestKubernetesUpgrade FAILED at 2022-09-21 22:04:36.011661338 +0000 UTC m=+2244.150113346
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220921215522-10174
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220921215522-10174:

-- stdout --
	[
	    {
	        "Id": "78e6429f54d6491f63320f4836b927743b052e04b60ab2936b115556d1f08623",
	        "Created": "2022-09-21T21:55:31.510769348Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 163812,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T21:56:11.026038243Z",
	            "FinishedAt": "2022-09-21T21:56:09.280743853Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/78e6429f54d6491f63320f4836b927743b052e04b60ab2936b115556d1f08623/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/78e6429f54d6491f63320f4836b927743b052e04b60ab2936b115556d1f08623/hostname",
	        "HostsPath": "/var/lib/docker/containers/78e6429f54d6491f63320f4836b927743b052e04b60ab2936b115556d1f08623/hosts",
	        "LogPath": "/var/lib/docker/containers/78e6429f54d6491f63320f4836b927743b052e04b60ab2936b115556d1f08623/78e6429f54d6491f63320f4836b927743b052e04b60ab2936b115556d1f08623-json.log",
	        "Name": "/kubernetes-upgrade-20220921215522-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220921215522-10174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220921215522-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/996ea718c980edd62bcd4bc973111203322953381af43c6088f7819e8d6b15b2-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/996ea718c980edd62bcd4bc973111203322953381af43c6088f7819e8d6b15b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/996ea718c980edd62bcd4bc973111203322953381af43c6088f7819e8d6b15b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/996ea718c980edd62bcd4bc973111203322953381af43c6088f7819e8d6b15b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220921215522-10174",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220921215522-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220921215522-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220921215522-10174",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220921215522-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "920af20b7ee11cedb3f2be56164b424fcbd75f33ad214de4ee14caa52403027b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49344"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49343"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49340"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49342"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49341"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/920af20b7ee1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220921215522-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "78e6429f54d6",
	                        "kubernetes-upgrade-20220921215522-10174"
	                    ],
	                    "NetworkID": "454b31bb712a5040405e0ca56980290573f47fd969e65a7a2ee9d7f640b1457f",
	                    "EndpointID": "764a8970c8218b00af2b0dad6965e745ec88b4a1dc8504ec47f82dfb39c1cbe9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
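The full docker inspect dump is kept for the record, but for quick triage the same data can be narrowed with a Go template via 'docker inspect -f'; a sketch using this run's container and network name:

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}} restarts={{.RestartCount}}' kubernetes-upgrade-20220921215522-10174
	docker inspect -f '{{(index .NetworkSettings.Networks "kubernetes-upgrade-20220921215522-10174").IPAddress}}' kubernetes-upgrade-20220921215522-10174

Against the JSON above, the first command reports a running container started at 2022-09-21T21:56:11Z, and the second the static IP 192.168.67.2.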
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220921215522-10174 -n kubernetes-upgrade-20220921215522-10174
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220921215522-10174 -n kubernetes-upgrade-20220921215522-10174: exit status 2 (363.359067ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
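Exit status 2 with the host reported as Running is consistent with the container being up while the Kubernetes components inside it are not (matching the kubelet failure earlier); dropping the --format filter would show the per-component breakdown:

	out/minikube-linux-amd64 status -p kubernetes-upgrade-20220921215522-10174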
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220921215522-10174 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | cert-options-20220921215754-10174       | cert-options-20220921215754-10174       | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                     |                     |
	| ssh     | -p                                      | cert-options-20220921215754-10174       | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | cert-options-20220921215754-10174       |                                         |         |         |                     |                     |
	|         | -- sudo cat                             |                                         |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-options-20220921215754-10174       | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | cert-options-20220921215754-10174       |                                         |         |         |                     |                     |
	| start   | -p pause-20220921215721-10174           | pause-20220921215721-10174              | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | -v=1 --driver=docker                    |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| start   | -p auto-20220921215523-10174            | auto-20220921215523-10174               | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:59 UTC |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| pause   | -p pause-20220921215721-10174           | pause-20220921215721-10174              | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| unpause | -p pause-20220921215721-10174           | pause-20220921215721-10174              | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| pause   | -p pause-20220921215721-10174           | pause-20220921215721-10174              | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| delete  | -p pause-20220921215721-10174           | pause-20220921215721-10174              | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| profile | list --output json                      | minikube                                | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	| delete  | -p pause-20220921215721-10174           | pause-20220921215721-10174              | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	| start   | -p                                      | kindnet-20220921215523-10174            | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174            |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker           |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220921215524-10174    | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:59 UTC |
	|         | cert-expiration-20220921215524-10174    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-expiration-20220921215524-10174    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | cert-expiration-20220921215524-10174    |                                         |         |         |                     |                     |
	| start   | -p cilium-20220921215524-10174          | cilium-20220921215524-10174             | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --cni=cilium --driver=docker            |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| ssh     | -p auto-20220921215523-10174            | auto-20220921215523-10174               | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	| delete  | -p auto-20220921215523-10174            | auto-20220921215523-10174               | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	| start   | -p calico-20220921215524-10174          | calico-20220921215524-10174             | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --cni=calico --driver=docker            |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| ssh     | -p                                      | kindnet-20220921215523-10174            | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174            |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	| delete  | -p                                      | kindnet-20220921215523-10174            | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174            |                                         |         |         |                     |                     |
	| start   | -p                                      | enable-default-cni-20220921215523-10174 | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC |                     |
	|         | enable-default-cni-20220921215523-10174 |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true               |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| ssh     | -p cilium-20220921215524-10174          | cilium-20220921215524-10174             | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	| delete  | -p cilium-20220921215524-10174          | cilium-20220921215524-10174             | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	| start   | -p bridge-20220921215523-10174          | bridge-20220921215523-10174             | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --alsologtostderr                       |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                     |                     |
	|         | --cni=bridge --driver=docker            |                                         |         |         |                     |                     |
	|         | --container-runtime=containerd          |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220921215523-10174          | bridge-20220921215523-10174             | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                        |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:01:20
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:01:20.399992  215122 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:01:20.400128  215122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:01:20.400143  215122 out.go:309] Setting ErrFile to fd 2...
	I0921 22:01:20.400151  215122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:01:20.400261  215122 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:01:20.400815  215122 out.go:303] Setting JSON to false
	I0921 22:01:20.402724  215122 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2632,"bootTime":1663795049,"procs":1067,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:01:20.402792  215122 start.go:125] virtualization: kvm guest
	I0921 22:01:20.405589  215122 out.go:177] * [bridge-20220921215523-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:01:20.406830  215122 notify.go:214] Checking for updates...
	I0921 22:01:20.408353  215122 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:01:20.409956  215122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:01:20.411346  215122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:01:20.412867  215122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:01:20.414274  215122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:01:20.415999  215122 config.go:180] Loaded profile config "calico-20220921215524-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:01:20.416089  215122 config.go:180] Loaded profile config "enable-default-cni-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:01:20.416168  215122 config.go:180] Loaded profile config "kubernetes-upgrade-20220921215522-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:01:20.416216  215122 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:01:20.447353  215122 docker.go:137] docker version: linux-20.10.18
	I0921 22:01:20.447449  215122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:01:20.543184  215122 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:01:20.468243338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:01:20.543395  215122 docker.go:254] overlay module found
	I0921 22:01:20.545758  215122 out.go:177] * Using the docker driver based on user configuration
	I0921 22:01:20.547135  215122 start.go:284] selected driver: docker
	I0921 22:01:20.547158  215122 start.go:808] validating driver "docker" against <nil>
	I0921 22:01:20.547185  215122 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:01:20.548120  215122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:01:20.642575  215122 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:01:20.569196544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:01:20.642729  215122 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:01:20.642899  215122 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:01:20.644995  215122 out.go:177] * Using Docker driver with root privileges
	I0921 22:01:20.646356  215122 cni.go:95] Creating CNI manager for "bridge"
	I0921 22:01:20.646377  215122 start_flags.go:311] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0921 22:01:20.646385  215122 start_flags.go:316] config:
	{Name:bridge-20220921215523-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:bridge-20220921215523-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:01:20.647870  215122 out.go:177] * Starting control plane node bridge-20220921215523-10174 in cluster bridge-20220921215523-10174
	I0921 22:01:20.649143  215122 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:01:20.650480  215122 out.go:177] * Pulling base image ...
	I0921 22:01:20.651764  215122 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:01:20.651804  215122 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:01:20.651815  215122 cache.go:57] Caching tarball of preloaded images
	I0921 22:01:20.651860  215122 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:01:20.652066  215122 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:01:20.652088  215122 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:01:20.652199  215122 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/config.json ...
	I0921 22:01:20.652225  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/config.json: {Name:mk27c183536830420bb2d6132b593804695deea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:20.677363  215122 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:01:20.677390  215122 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:01:20.677404  215122 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:01:20.677444  215122 start.go:364] acquiring machines lock for bridge-20220921215523-10174: {Name:mkc981b7e955f66b2bf99465d3282ba0dad6f5bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:01:20.677563  215122 start.go:368] acquired machines lock for "bridge-20220921215523-10174" in 99.083µs
	I0921 22:01:20.677587  215122 start.go:93] Provisioning new machine with config: &{Name:bridge-20220921215523-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:bridge-20220921215523-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:01:20.677667  215122 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:01:20.796308  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:22.796445  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:21.788431  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:24.282795  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:20.680580  215122 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 22:01:20.680792  215122 start.go:159] libmachine.API.Create for "bridge-20220921215523-10174" (driver="docker")
	I0921 22:01:20.680826  215122 client.go:168] LocalClient.Create starting
	I0921 22:01:20.680895  215122 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:01:20.680928  215122 main.go:134] libmachine: Decoding PEM data...
	I0921 22:01:20.680945  215122 main.go:134] libmachine: Parsing certificate...
	I0921 22:01:20.681015  215122 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:01:20.681038  215122 main.go:134] libmachine: Decoding PEM data...
	I0921 22:01:20.681052  215122 main.go:134] libmachine: Parsing certificate...
	I0921 22:01:20.681458  215122 cli_runner.go:164] Run: docker network inspect bridge-20220921215523-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:01:20.704178  215122 cli_runner.go:211] docker network inspect bridge-20220921215523-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:01:20.704263  215122 network_create.go:272] running [docker network inspect bridge-20220921215523-10174] to gather additional debugging logs...
	I0921 22:01:20.704302  215122 cli_runner.go:164] Run: docker network inspect bridge-20220921215523-10174
	W0921 22:01:20.729823  215122 cli_runner.go:211] docker network inspect bridge-20220921215523-10174 returned with exit code 1
	I0921 22:01:20.729859  215122 network_create.go:275] error running [docker network inspect bridge-20220921215523-10174]: docker network inspect bridge-20220921215523-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220921215523-10174
	I0921 22:01:20.729877  215122 network_create.go:277] output of [docker network inspect bridge-20220921215523-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220921215523-10174
	
	** /stderr **
	I0921 22:01:20.729920  215122 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:01:20.755762  215122 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:01:20.756906  215122 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:01:20.757830  215122 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-454b31bb712a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:f9:25:10:f3}}
	I0921 22:01:20.758735  215122 network.go:290] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.76.0:0xc0004f8098] misses:0}
	I0921 22:01:20.758771  215122 network.go:236] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:01:20.758784  215122 network_create.go:115] attempt to create docker network bridge-20220921215523-10174 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0921 22:01:20.758842  215122 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-20220921215523-10174 bridge-20220921215523-10174
	I0921 22:01:20.827257  215122 network_create.go:99] docker network bridge-20220921215523-10174 192.168.76.0/24 created
	I0921 22:01:20.827297  215122 kic.go:106] calculated static IP "192.168.76.2" for the "bridge-20220921215523-10174" container
	I0921 22:01:20.827361  215122 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:01:20.854549  215122 cli_runner.go:164] Run: docker volume create bridge-20220921215523-10174 --label name.minikube.sigs.k8s.io=bridge-20220921215523-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:01:20.879039  215122 oci.go:103] Successfully created a docker volume bridge-20220921215523-10174
	I0921 22:01:20.879114  215122 cli_runner.go:164] Run: docker run --rm --name bridge-20220921215523-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220921215523-10174 --entrypoint /usr/bin/test -v bridge-20220921215523-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:01:21.444486  215122 oci.go:107] Successfully prepared a docker volume bridge-20220921215523-10174
	I0921 22:01:21.444562  215122 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:01:21.444582  215122 kic.go:179] Starting extracting preloaded images to volume ...
	I0921 22:01:21.444637  215122 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220921215523-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir
	I0921 22:01:25.295077  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:27.296268  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:26.283052  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:28.284721  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:27.967769  215122 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220921215523-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir: (6.523042715s)
	I0921 22:01:27.967801  215122 kic.go:188] duration metric: took 6.523216 seconds to extract preloaded images to volume
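	[Note: the 6.5s extraction above follows the generic preload pattern: mount the host-side lz4 tarball read-only into a throwaway container, mount the target named volume at /extractDir, and let tar decompress in place. A hedged sketch, with hypothetical variable names standing in for the concrete paths in the log:
	  # PRELOAD_TARBALL, VOLUME_NAME and KICBASE_IMAGE are placeholders,
	  # not names used by minikube itself
	  docker run --rm \
	    -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
	    -v "$VOLUME_NAME:/extractDir" \
	    --entrypoint /usr/bin/tar \
	    "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
	]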
	W0921 22:01:27.967925  215122 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:01:27.968011  215122 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:01:28.061462  215122 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20220921215523-10174 --name bridge-20220921215523-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220921215523-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20220921215523-10174 --network bridge-20220921215523-10174 --ip 192.168.76.2 --volume bridge-20220921215523-10174:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:01:28.458187  215122 cli_runner.go:164] Run: docker container inspect bridge-20220921215523-10174 --format={{.State.Running}}
	I0921 22:01:28.485826  215122 cli_runner.go:164] Run: docker container inspect bridge-20220921215523-10174 --format={{.State.Status}}
	I0921 22:01:28.511954  215122 cli_runner.go:164] Run: docker exec bridge-20220921215523-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:01:28.572618  215122 oci.go:144] the created container "bridge-20220921215523-10174" has a running status.
	I0921 22:01:28.572655  215122 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa...
	I0921 22:01:28.687866  215122 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:01:28.760716  215122 cli_runner.go:164] Run: docker container inspect bridge-20220921215523-10174 --format={{.State.Status}}
	I0921 22:01:28.800826  215122 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:01:28.800848  215122 kic_runner.go:114] Args: [docker exec --privileged bridge-20220921215523-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:01:28.883124  215122 cli_runner.go:164] Run: docker container inspect bridge-20220921215523-10174 --format={{.State.Status}}
	I0921 22:01:28.913807  215122 machine.go:88] provisioning docker machine ...
	I0921 22:01:28.913853  215122 ubuntu.go:169] provisioning hostname "bridge-20220921215523-10174"
	I0921 22:01:28.913911  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:28.944411  215122 main.go:134] libmachine: Using SSH client type: native
	I0921 22:01:28.944625  215122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49393 <nil> <nil>}
	I0921 22:01:28.944649  215122 main.go:134] libmachine: About to run SSH command:
	sudo hostname bridge-20220921215523-10174 && echo "bridge-20220921215523-10174" | sudo tee /etc/hostname
	I0921 22:01:29.104862  215122 main.go:134] libmachine: SSH cmd err, output: <nil>: bridge-20220921215523-10174
	
	I0921 22:01:29.104937  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
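	[Note: the repeated container-inspect calls above resolve the host port that Docker mapped to the container's sshd (published as 127.0.0.1::22 in the docker run line). The same lookup can be done with docker port; the sample output mirrors the port seen in this log:
	  docker port bridge-20220921215523-10174 22/tcp
	  # 127.0.0.1:49393
	]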
	I0921 22:01:29.131408  215122 main.go:134] libmachine: Using SSH client type: native
	I0921 22:01:29.131557  215122 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49393 <nil> <nil>}
	I0921 22:01:29.131579  215122 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-20220921215523-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-20220921215523-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-20220921215523-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:01:29.259477  215122 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:01:29.259512  215122 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:01:29.259536  215122 ubuntu.go:177] setting up certificates
	I0921 22:01:29.259544  215122 provision.go:83] configureAuth start
	I0921 22:01:29.259591  215122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220921215523-10174
	I0921 22:01:29.283351  215122 provision.go:138] copyHostCerts
	I0921 22:01:29.283422  215122 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:01:29.283434  215122 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:01:29.283513  215122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:01:29.283627  215122 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:01:29.283645  215122 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:01:29.283684  215122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:01:29.283807  215122 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:01:29.283823  215122 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:01:29.283858  215122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:01:29.283925  215122 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.bridge-20220921215523-10174 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube bridge-20220921215523-10174]
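	[Note: minikube generates this server certificate in Go, but an approximate openssl equivalent (hypothetical, for illustration only; the SAN list and org follow the log line above) makes the SAN handling explicit:
	  # sign a server cert with the listed SANs against the existing minikube CA
	  openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr -subj "/O=jenkins.bridge-20220921215523-10174"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:bridge-20220921215523-10174')
	]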
	I0921 22:01:29.347391  215122 provision.go:172] copyRemoteCerts
	I0921 22:01:29.347446  215122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:01:29.347478  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:29.373584  215122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49393 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa Username:docker}
	I0921 22:01:29.467228  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:01:29.484247  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:01:29.501261  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0921 22:01:29.518060  215122 provision.go:86] duration metric: configureAuth took 258.507279ms
	I0921 22:01:29.518084  215122 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:01:29.518230  215122 config.go:180] Loaded profile config "bridge-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:01:29.518241  215122 machine.go:91] provisioned docker machine in 604.41268ms
	I0921 22:01:29.518246  215122 client.go:171] LocalClient.Create took 8.837412157s
	I0921 22:01:29.518260  215122 start.go:167] duration metric: libmachine.API.Create for "bridge-20220921215523-10174" took 8.837469774s
	I0921 22:01:29.518270  215122 start.go:300] post-start starting for "bridge-20220921215523-10174" (driver="docker")
	I0921 22:01:29.518276  215122 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:01:29.518315  215122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:01:29.518356  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:29.542683  215122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49393 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa Username:docker}
	I0921 22:01:29.635262  215122 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:01:29.638009  215122 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:01:29.638037  215122 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:01:29.638048  215122 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:01:29.638054  215122 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:01:29.638064  215122 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:01:29.638117  215122 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:01:29.638196  215122 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:01:29.638279  215122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:01:29.644987  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:01:29.662057  215122 start.go:303] post-start completed in 143.775945ms
	I0921 22:01:29.662374  215122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220921215523-10174
	I0921 22:01:29.686087  215122 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/config.json ...
	I0921 22:01:29.686359  215122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:01:29.686401  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:29.712755  215122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49393 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa Username:docker}
	I0921 22:01:29.800243  215122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:01:29.804306  215122 start.go:128] duration metric: createHost completed in 9.126616648s
	I0921 22:01:29.804329  215122 start.go:83] releasing machines lock for "bridge-20220921215523-10174", held for 9.126753984s
	I0921 22:01:29.804398  215122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-20220921215523-10174
	I0921 22:01:29.828357  215122 ssh_runner.go:195] Run: systemctl --version
	I0921 22:01:29.828401  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:29.828440  215122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:01:29.828524  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:29.853434  215122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49393 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa Username:docker}
	I0921 22:01:29.854844  215122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49393 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa Username:docker}
	I0921 22:01:29.971953  215122 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:01:29.981882  215122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:01:29.991059  215122 docker.go:188] disabling docker service ...
	I0921 22:01:29.991132  215122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:01:30.007744  215122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:01:30.016621  215122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:01:30.108331  215122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:01:30.182311  215122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:01:30.191307  215122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:01:30.203641  215122 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:01:30.211170  215122 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:01:30.218912  215122 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:01:30.226199  215122 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
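	[Note: the four sed edits above rewrite sandbox_image, restrict_oom_score_adj, SystemdCgroup and conf_dir in /etc/containerd/config.toml. A quick way to confirm they landed before containerd is restarted (an illustrative check, not part of the test):
	  sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' \
	    /etc/containerd/config.toml
	]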
	I0921 22:01:30.233859  215122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:01:30.240077  215122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:01:30.246033  215122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:01:30.316300  215122 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:01:30.388357  215122 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:01:30.388430  215122 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:01:30.391887  215122 start.go:471] Will wait 60s for crictl version
	I0921 22:01:30.391949  215122 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:01:30.419762  215122 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:01:30.419824  215122 ssh_runner.go:195] Run: containerd --version
	I0921 22:01:30.449680  215122 ssh_runner.go:195] Run: containerd --version
	I0921 22:01:30.481291  215122 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:01:30.482875  215122 cli_runner.go:164] Run: docker network inspect bridge-20220921215523-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:01:30.507780  215122 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0921 22:01:30.510930  215122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:01:30.520463  215122 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:01:30.520525  215122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:01:30.544218  215122 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:01:30.544244  215122 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:01:30.544298  215122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:01:30.567882  215122 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:01:30.567905  215122 cache_images.go:84] Images are preloaded, skipping loading
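	[Note: the preload check above compares the JSON from crictl images against the expected image list for v1.25.2. If jq is available on the node (an assumption; it is not guaranteed by anything in this log), the same JSON can be flattened for a manual diff:
	  sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
	]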
	I0921 22:01:30.567945  215122 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:01:30.591632  215122 cni.go:95] Creating CNI manager for "bridge"
	I0921 22:01:30.591658  215122 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:01:30.591672  215122 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-20220921215523-10174 NodeName:bridge-20220921215523-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:01:30.591815  215122 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "bridge-20220921215523-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:01:30.591902  215122 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=bridge-20220921215523-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:bridge-20220921215523-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
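	[Note: before the real kubeadm init below, the generated /var/tmp/minikube/kubeadm.yaml could be exercised without touching the node using kubeadm's dry-run mode. An illustrative step only; the test itself does not run it:
	  sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	]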
	I0921 22:01:30.591948  215122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:01:30.599049  215122 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:01:30.599104  215122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:01:30.605713  215122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (520 bytes)
	I0921 22:01:30.618251  215122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:01:30.630673  215122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
	I0921 22:01:30.642757  215122 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:01:30.645493  215122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:01:30.653938  215122 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174 for IP: 192.168.76.2
	I0921 22:01:30.654035  215122 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:01:30.654075  215122 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:01:30.654126  215122 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.key
	I0921 22:01:30.654140  215122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt with IP's: []
	I0921 22:01:30.717387  215122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt ...
	I0921 22:01:30.717416  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: {Name:mk43aabc1da4bac55bcffe901401521f4aea0e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:30.717609  215122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.key ...
	I0921 22:01:30.717627  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.key: {Name:mke49682624e380c4693e416e37baf3a4bca8b82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:30.717729  215122 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.key.31bdca25
	I0921 22:01:30.717746  215122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:01:30.903254  215122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.crt.31bdca25 ...
	I0921 22:01:30.903292  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.crt.31bdca25: {Name:mk6bf35dba38a8b20d945a06965afc655853b287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:30.903523  215122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.key.31bdca25 ...
	I0921 22:01:30.903551  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.key.31bdca25: {Name:mke9ddfe2a0e862817321007cda24dba740f7c0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:30.903705  215122 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.crt
	I0921 22:01:30.903836  215122 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.key
	I0921 22:01:30.903894  215122 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.key
	I0921 22:01:30.903911  215122 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.crt with IP's: []
	I0921 22:01:31.104376  215122 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.crt ...
	I0921 22:01:31.104418  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.crt: {Name:mk3d1852a42d99d3174c0629c6f8a1b67ab3049f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:31.104623  215122 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.key ...
	I0921 22:01:31.104638  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.key: {Name:mka948a198a0065642897187120a9ce1a1f48079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:31.104811  215122 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:01:31.104849  215122 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:01:31.104863  215122 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:01:31.104887  215122 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:01:31.104912  215122 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:01:31.104934  215122 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:01:31.104971  215122 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:01:31.105464  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:01:31.123583  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:01:31.140587  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:01:31.157520  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0921 22:01:31.174427  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:01:31.191183  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:01:31.208210  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:01:31.224755  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:01:31.242006  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:01:31.259325  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:01:31.275913  215122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:01:31.293352  215122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:01:31.306447  215122 ssh_runner.go:195] Run: openssl version
	I0921 22:01:31.311131  215122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:01:31.318135  215122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:01:31.321067  215122 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:01:31.321106  215122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:01:31.325758  215122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:01:31.332811  215122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:01:31.340266  215122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:01:31.343153  215122 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:01:31.343208  215122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:01:31.347987  215122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:01:31.354868  215122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:01:31.362081  215122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:01:31.364960  215122 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:01:31.364997  215122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:01:31.369540  215122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
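	[Note: the hash-and-symlink pairs above implement OpenSSL's subject-hash lookup convention: a CA is found under /etc/ssl/certs via a file named <subject-hash>.0. Reproducing the first pair by hand (the b5213941 value matches the link created above):
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	]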
	I0921 22:01:31.376545  215122 kubeadm.go:396] StartCluster: {Name:bridge-20220921215523-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:bridge-20220921215523-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:01:31.376625  215122 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:01:31.376654  215122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:01:31.400010  215122 cri.go:87] found id: ""
	I0921 22:01:31.400057  215122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:01:31.406974  215122 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:01:31.413564  215122 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:01:31.413604  215122 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:01:31.420012  215122 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:01:31.420051  215122 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:01:31.460421  215122 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:01:31.460512  215122 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:01:31.487776  215122 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:01:31.487888  215122 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:01:31.487954  215122 kubeadm.go:317] OS: Linux
	I0921 22:01:31.488026  215122 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:01:31.488104  215122 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:01:31.488168  215122 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:01:31.488235  215122 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:01:31.488291  215122 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:01:31.488353  215122 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:01:31.488423  215122 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:01:31.488515  215122 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:01:31.488577  215122 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:01:31.552193  215122 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:01:31.552354  215122 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:01:31.552493  215122 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:01:31.673966  215122 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:01:31.677138  215122 out.go:204]   - Generating certificates and keys ...
	I0921 22:01:31.677275  215122 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:01:31.677362  215122 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:01:31.927942  215122 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:01:32.076422  215122 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:01:32.186739  215122 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:01:32.514364  215122 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:01:32.655578  215122 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:01:32.655808  215122 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [bridge-20220921215523-10174 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0921 22:01:32.759413  215122 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:01:32.759668  215122 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [bridge-20220921215523-10174 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0921 22:01:32.896993  215122 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:01:33.198251  215122 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:01:33.422103  215122 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:01:33.422230  215122 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:01:33.656334  215122 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:01:33.773030  215122 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:01:33.908387  215122 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:01:34.027812  215122 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:01:34.039137  215122 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:01:34.040753  215122 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:01:34.040864  215122 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:01:34.127129  215122 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:01:29.794678  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:31.795317  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:34.296071  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:30.783006  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:32.786124  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:34.130675  215122 out.go:204]   - Booting up control plane ...
	I0921 22:01:34.130824  215122 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:01:34.130931  215122 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:01:34.131032  215122 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:01:34.131566  215122 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:01:34.133233  215122 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:01:36.298594  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:38.800929  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:35.284259  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:37.286508  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:39.783829  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:40.135581  215122 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002264 seconds
	I0921 22:01:40.135792  215122 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:01:40.143652  215122 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:01:40.658118  215122 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:01:40.658320  215122 kubeadm.go:317] [mark-control-plane] Marking the node bridge-20220921215523-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:01:41.165185  215122 kubeadm.go:317] [bootstrap-token] Using token: aqku34.qf5z1n8skf8o3ddv
	I0921 22:01:41.167890  215122 out.go:204]   - Configuring RBAC rules ...
	I0921 22:01:41.168031  215122 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:01:41.169792  215122 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:01:41.174042  215122 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:01:41.176023  215122 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:01:41.177813  215122 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:01:41.179505  215122 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:01:41.186227  215122 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:01:41.390399  215122 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:01:41.581750  215122 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:01:41.583080  215122 kubeadm.go:317] 
	I0921 22:01:41.583170  215122 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:01:41.583183  215122 kubeadm.go:317] 
	I0921 22:01:41.583281  215122 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:01:41.583311  215122 kubeadm.go:317] 
	I0921 22:01:41.583351  215122 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:01:41.583432  215122 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:01:41.583512  215122 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:01:41.583545  215122 kubeadm.go:317] 
	I0921 22:01:41.583629  215122 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:01:41.583641  215122 kubeadm.go:317] 
	I0921 22:01:41.583697  215122 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:01:41.583707  215122 kubeadm.go:317] 
	I0921 22:01:41.583848  215122 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:01:41.583951  215122 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:01:41.584038  215122 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:01:41.584048  215122 kubeadm.go:317] 
	I0921 22:01:41.584138  215122 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:01:41.584233  215122 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:01:41.584244  215122 kubeadm.go:317] 
	I0921 22:01:41.584332  215122 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token aqku34.qf5z1n8skf8o3ddv \
	I0921 22:01:41.584460  215122 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:01:41.584500  215122 kubeadm.go:317] 	--control-plane 
	I0921 22:01:41.584514  215122 kubeadm.go:317] 
	I0921 22:01:41.584614  215122 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:01:41.584624  215122 kubeadm.go:317] 
	I0921 22:01:41.584710  215122 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token aqku34.qf5z1n8skf8o3ddv \
	I0921 22:01:41.584824  215122 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
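	The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA to verify a join command before running it. A minimal sketch using the standard openssl pipeline; the kubeadm default CA path is assumed (this minikube cluster keeps its certs under /var/lib/minikube/certs instead, so adjust the path):
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	The hex digest should match the sha256:... value printed above.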
	I0921 22:01:41.587861  215122 kubeadm.go:317] W0921 22:01:31.452977     737 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:01:41.588099  215122 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:01:41.588226  215122 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
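	The CRI-socket warning above is harmless here, but it can be silenced by giving the socket a URL scheme in the kubeadm config. A sketch of the relevant kubeadm.yaml fragment (v1beta3 is the config API version kubeadm uses for Kubernetes v1.25; the rest of the file is omitted):
	    apiVersion: kubeadm.k8s.io/v1beta3
	    kind: InitConfiguration
	    nodeRegistration:
	      criSocket: unix:///run/containerd/containerd.sock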
	I0921 22:01:41.588268  215122 cni.go:95] Creating CNI manager for "bridge"
	I0921 22:01:41.590760  215122 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0921 22:01:41.295390  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:43.796834  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:41.787421  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:44.283230  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:41.592344  215122 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0921 22:01:41.601450  215122 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
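	The log records only the size of 1-k8s.conflist (457 bytes), not its contents. For orientation, a bridge conflist of roughly that shape might look like the sketch below; the bridge name and pod subnet are illustrative assumptions, not values taken from this run:
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }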
	I0921 22:01:41.676889  215122 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:01:41.676983  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:41.676984  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=bridge-20220921215523-10174 minikube.k8s.io/updated_at=2022_09_21T22_01_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:41.686757  215122 ops.go:34] apiserver oom_adj: -16
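	An oom_adj of -16 tells the kernel's OOM killer to strongly prefer other victims over the apiserver. oom_adj is the legacy /proc knob; on current kernels the equivalent check reads oom_score_adj (range -1000..1000):
	    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj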
	I0921 22:01:41.811777  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:42.436985  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:42.936471  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:43.437449  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:43.936528  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:44.436919  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:44.936555  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:45.796921  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:48.296174  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:46.785358  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:49.283231  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:45.437123  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:45.936578  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:46.436936  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:46.936921  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:47.436757  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:47.936741  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:48.436653  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:48.937368  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:49.436939  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:49.936924  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:50.296253  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:52.795419  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:51.784647  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:53.785717  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:50.437167  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:50.936941  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:51.436582  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:51.936930  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:52.436939  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:52.937410  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:53.436577  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:53.936955  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:54.436802  215122 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:01:54.509936  215122 kubeadm.go:1067] duration metric: took 12.833017338s to wait for elevateKubeSystemPrivileges.
	I0921 22:01:54.509973  215122 kubeadm.go:398] StartCluster complete in 23.13343596s
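	The burst of "kubectl get sa default" calls above is a poll: minikube retries until the control plane has created the "default" ServiceAccount, which is roughly what the 12.8s elevateKubeSystemPrivileges metric measures. The same wait as a one-off shell sketch:
	    until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done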
	I0921 22:01:54.509994  215122 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:54.510181  215122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:01:54.512644  215122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:01:55.033091  215122 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "bridge-20220921215523-10174" rescaled to 1
	I0921 22:01:55.033167  215122 start.go:211] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:01:55.036011  215122 out.go:177] * Verifying Kubernetes components...
	I0921 22:01:55.033220  215122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:01:55.033256  215122 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:01:55.033396  215122 config.go:180] Loaded profile config "bridge-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:01:55.037672  215122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:01:55.037838  215122 addons.go:65] Setting storage-provisioner=true in profile "bridge-20220921215523-10174"
	I0921 22:01:55.037857  215122 addons.go:153] Setting addon storage-provisioner=true in "bridge-20220921215523-10174"
	W0921 22:01:55.037864  215122 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:01:55.037932  215122 host.go:66] Checking if "bridge-20220921215523-10174" exists ...
	I0921 22:01:55.038499  215122 cli_runner.go:164] Run: docker container inspect bridge-20220921215523-10174 --format={{.State.Status}}
	I0921 22:01:55.038568  215122 addons.go:65] Setting default-storageclass=true in profile "bridge-20220921215523-10174"
	I0921 22:01:55.038594  215122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-20220921215523-10174"
	I0921 22:01:55.038875  215122 cli_runner.go:164] Run: docker container inspect bridge-20220921215523-10174 --format={{.State.Status}}
	I0921 22:01:55.068033  215122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:01:55.069630  215122 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:01:55.069649  215122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:01:55.069700  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:55.076739  215122 addons.go:153] Setting addon default-storageclass=true in "bridge-20220921215523-10174"
	W0921 22:01:55.076769  215122 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:01:55.076798  215122 host.go:66] Checking if "bridge-20220921215523-10174" exists ...
	I0921 22:01:55.077281  215122 cli_runner.go:164] Run: docker container inspect bridge-20220921215523-10174 --format={{.State.Status}}
	I0921 22:01:55.111072  215122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49393 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa Username:docker}
	I0921 22:01:55.115635  215122 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:01:55.115663  215122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:01:55.115836  215122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-20220921215523-10174
	I0921 22:01:55.143012  215122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49393 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/bridge-20220921215523-10174/id_rsa Username:docker}
	I0921 22:01:55.206059  215122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:01:55.207286  215122 node_ready.go:35] waiting up to 5m0s for node "bridge-20220921215523-10174" to be "Ready" ...
	I0921 22:01:55.210245  215122 node_ready.go:49] node "bridge-20220921215523-10174" has status "Ready":"True"
	I0921 22:01:55.210270  215122 node_ready.go:38] duration metric: took 2.956209ms waiting for node "bridge-20220921215523-10174" to be "Ready" ...
	I0921 22:01:55.210283  215122 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:01:55.217942  215122 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-bqxgb" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:55.295379  215122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:01:55.395147  215122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:01:56.598791  215122 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.392690906s)
	I0921 22:01:56.598825  215122 start.go:810] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
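	The sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway address. After the kubectl replace, the injected stanza reads:
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }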
	I0921 22:01:56.703574  215122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.408152473s)
	I0921 22:01:56.703625  215122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308438079s)
	I0921 22:01:56.705275  215122 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0921 22:01:56.706487  215122 addons.go:414] enableAddons completed in 1.673269376s
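	Both addons were enabled automatically on start; they can also be inspected or toggled per profile from the CLI, e.g. for the profile used in this run:
	    out/minikube-linux-amd64 -p bridge-20220921215523-10174 addons list
	    out/minikube-linux-amd64 -p bridge-20220921215523-10174 addons enable storage-provisioner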
	I0921 22:01:57.225568  215122 pod_ready.go:97] error getting pod "coredns-565d847f94-bqxgb" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-bqxgb" not found
	I0921 22:01:57.225607  215122 pod_ready.go:81] duration metric: took 2.007638897s waiting for pod "coredns-565d847f94-bqxgb" in "kube-system" namespace to be "Ready" ...
	E0921 22:01:57.225619  215122 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-565d847f94-bqxgb" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-bqxgb" not found
	I0921 22:01:57.225628  215122 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-fxxks" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.230414  215122 pod_ready.go:92] pod "coredns-565d847f94-fxxks" in "kube-system" namespace has status "Ready":"True"
	I0921 22:01:57.230440  215122 pod_ready.go:81] duration metric: took 4.802284ms waiting for pod "coredns-565d847f94-fxxks" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.230454  215122 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.234767  215122 pod_ready.go:92] pod "etcd-bridge-20220921215523-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:01:57.234786  215122 pod_ready.go:81] duration metric: took 4.3247ms waiting for pod "etcd-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.234794  215122 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.238841  215122 pod_ready.go:92] pod "kube-apiserver-bridge-20220921215523-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:01:57.238865  215122 pod_ready.go:81] duration metric: took 4.061603ms waiting for pod "kube-apiserver-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.238879  215122 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.242961  215122 pod_ready.go:92] pod "kube-controller-manager-bridge-20220921215523-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:01:57.242977  215122 pod_ready.go:81] duration metric: took 4.091538ms waiting for pod "kube-controller-manager-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.242988  215122 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-l98hk" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.426825  215122 pod_ready.go:92] pod "kube-proxy-l98hk" in "kube-system" namespace has status "Ready":"True"
	I0921 22:01:57.426846  215122 pod_ready.go:81] duration metric: took 183.851206ms waiting for pod "kube-proxy-l98hk" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.426855  215122 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.826694  215122 pod_ready.go:92] pod "kube-scheduler-bridge-20220921215523-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:01:57.826717  215122 pod_ready.go:81] duration metric: took 399.856935ms waiting for pod "kube-scheduler-bridge-20220921215523-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:01:57.826725  215122 pod_ready.go:38] duration metric: took 2.616430261s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
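	The per-pod readiness waits above can be reproduced against the same cluster with kubectl's built-in wait; 5m matches the timeout minikube uses here:
	    kubectl -n kube-system wait pod --all --for=condition=Ready --timeout=5m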
	I0921 22:01:57.826742  215122 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:01:57.826786  215122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:01:57.837015  215122 api_server.go:71] duration metric: took 2.803807751s to wait for apiserver process to appear ...
	I0921 22:01:57.837046  215122 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:01:57.837056  215122 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:01:57.842917  215122 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0921 22:01:57.843710  215122 api_server.go:140] control plane version: v1.25.2
	I0921 22:01:57.843771  215122 api_server.go:130] duration metric: took 6.71763ms to wait for apiserver health ...
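	The same healthz probe can be issued by hand; -k skips TLS verification (the cluster CA is not in the host trust store) and ?verbose breaks the result down per check:
	    curl -k https://192.168.76.2:8443/healthz
	    curl -k 'https://192.168.76.2:8443/healthz?verbose'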
	I0921 22:01:57.843781  215122 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:01:58.028997  215122 system_pods.go:59] 7 kube-system pods found
	I0921 22:01:58.029048  215122 system_pods.go:61] "coredns-565d847f94-fxxks" [d6f9c23e-6e19-4508-b035-add7e2e4254c] Running
	I0921 22:01:58.029059  215122 system_pods.go:61] "etcd-bridge-20220921215523-10174" [32f2917b-da4f-412a-8102-0e356ea4cf2d] Running
	I0921 22:01:58.029066  215122 system_pods.go:61] "kube-apiserver-bridge-20220921215523-10174" [b17f870a-7301-4606-afa6-ea69e5c53dde] Running
	I0921 22:01:58.029073  215122 system_pods.go:61] "kube-controller-manager-bridge-20220921215523-10174" [ce2ffc4f-bf2a-4196-a9a6-8eecc8f17191] Running
	I0921 22:01:58.029079  215122 system_pods.go:61] "kube-proxy-l98hk" [bd75f59e-4881-4ebf-ad5d-148ad1c5c538] Running
	I0921 22:01:58.029097  215122 system_pods.go:61] "kube-scheduler-bridge-20220921215523-10174" [c6a289e9-7646-4bf5-a10d-76a8bbeb05ca] Running
	I0921 22:01:58.029108  215122 system_pods.go:61] "storage-provisioner" [27990257-9062-4409-b438-527977853a49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0921 22:01:58.029125  215122 system_pods.go:74] duration metric: took 185.331727ms to wait for pod list to return data ...
	I0921 22:01:58.029140  215122 default_sa.go:34] waiting for default service account to be created ...
	I0921 22:01:58.225893  215122 default_sa.go:45] found service account: "default"
	I0921 22:01:58.225918  215122 default_sa.go:55] duration metric: took 196.771614ms for default service account to be created ...
	I0921 22:01:58.225926  215122 system_pods.go:116] waiting for k8s-apps to be running ...
	I0921 22:01:58.428204  215122 system_pods.go:86] 7 kube-system pods found
	I0921 22:01:58.428234  215122 system_pods.go:89] "coredns-565d847f94-fxxks" [d6f9c23e-6e19-4508-b035-add7e2e4254c] Running
	I0921 22:01:58.428240  215122 system_pods.go:89] "etcd-bridge-20220921215523-10174" [32f2917b-da4f-412a-8102-0e356ea4cf2d] Running
	I0921 22:01:58.428245  215122 system_pods.go:89] "kube-apiserver-bridge-20220921215523-10174" [b17f870a-7301-4606-afa6-ea69e5c53dde] Running
	I0921 22:01:58.428250  215122 system_pods.go:89] "kube-controller-manager-bridge-20220921215523-10174" [ce2ffc4f-bf2a-4196-a9a6-8eecc8f17191] Running
	I0921 22:01:58.428254  215122 system_pods.go:89] "kube-proxy-l98hk" [bd75f59e-4881-4ebf-ad5d-148ad1c5c538] Running
	I0921 22:01:58.428260  215122 system_pods.go:89] "kube-scheduler-bridge-20220921215523-10174" [c6a289e9-7646-4bf5-a10d-76a8bbeb05ca] Running
	I0921 22:01:58.428264  215122 system_pods.go:89] "storage-provisioner" [27990257-9062-4409-b438-527977853a49] Running
	I0921 22:01:58.428270  215122 system_pods.go:126] duration metric: took 202.339794ms to wait for k8s-apps to be running ...
	I0921 22:01:58.428284  215122 system_svc.go:44] waiting for kubelet service to be running ....
	I0921 22:01:58.428321  215122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:01:58.438155  215122 system_svc.go:56] duration metric: took 9.862878ms WaitForService to wait for kubelet.
	I0921 22:01:58.438185  215122 kubeadm.go:573] duration metric: took 3.404981518s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0921 22:01:58.438203  215122 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:01:58.626287  215122 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:01:58.626314  215122 node_conditions.go:123] node cpu capacity is 8
	I0921 22:01:58.626326  215122 node_conditions.go:105] duration metric: took 188.11801ms to run NodePressure ...
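	The NodePressure check reads the node's capacity and pressure conditions; the same data is visible through kubectl, e.g. a jsonpath query for the MemoryPressure condition on this run's node:
	    kubectl get node bridge-20220921215523-10174 \
	      -o jsonpath='{.status.conditions[?(@.type=="MemoryPressure")].status}'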
	I0921 22:01:58.626336  215122 start.go:216] waiting for startup goroutines ...
	I0921 22:01:58.669164  215122 start.go:506] kubectl: 1.25.2, cluster: 1.25.2 (minor skew: 0)
	I0921 22:01:58.671666  215122 out.go:177] * Done! kubectl is now configured to use "bridge-20220921215523-10174" cluster and "default" namespace by default
	I0921 22:01:54.795644  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:56.796027  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:59.295611  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:56.284041  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:58.783815  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:01.295691  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:03.295961  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:00.783923  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:03.283561  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:05.795975  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:07.796658  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:05.784837  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:07.785470  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:10.296146  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:12.795313  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:10.283603  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:12.785065  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:14.785360  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:14.795662  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:17.295701  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:19.296052  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:16.785760  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:18.786632  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:21.795289  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:23.795532  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:21.282907  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:23.283162  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:26.295643  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:28.795800  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:25.283561  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:27.284451  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:29.782771  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:31.295448  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:33.295517  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:31.785078  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:34.283276  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:37.018351  163433 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0921 22:02:37.018524  163433 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I0921 22:02:37.021344  163433 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:02:37.021412  163433 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:02:37.021521  163433 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:02:37.021617  163433 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:02:37.021677  163433 kubeadm.go:317] OS: Linux
	I0921 22:02:37.021750  163433 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:02:37.021830  163433 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:02:37.021902  163433 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:02:37.021973  163433 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:02:37.022047  163433 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:02:37.022121  163433 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:02:37.022188  163433 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:02:37.022253  163433 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:02:37.022318  163433 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:02:37.022421  163433 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:02:37.022566  163433 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:02:37.022728  163433 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:02:37.022848  163433 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:02:37.025283  163433 out.go:204]   - Generating certificates and keys ...
	I0921 22:02:37.025385  163433 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:02:37.025475  163433 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:02:37.025582  163433 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:02:37.025662  163433 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:02:37.025742  163433 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:02:37.025826  163433 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:02:37.025908  163433 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:02:37.025970  163433 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:02:37.026043  163433 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:02:37.026115  163433 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:02:37.026150  163433 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:02:37.026236  163433 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:02:37.026314  163433 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:02:37.026388  163433 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:02:37.026472  163433 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:02:37.026545  163433 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:02:37.026684  163433 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:02:37.026792  163433 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:02:37.026839  163433 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:02:37.026895  163433 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:02:37.028542  163433 out.go:204]   - Booting up control plane ...
	I0921 22:02:37.028642  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:02:37.028731  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:02:37.028833  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:02:37.028952  163433 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:02:37.029138  163433 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:02:37.029208  163433 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0921 22:02:37.029298  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.029557  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.029641  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.029848  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.029934  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.030147  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.030218  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.030380  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.030466  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:02:37.030646  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:02:37.030656  163433 kubeadm.go:317] 
	I0921 22:02:37.030710  163433 kubeadm.go:317] Unfortunately, an error has occurred:
	I0921 22:02:37.030783  163433 kubeadm.go:317] 	timed out waiting for the condition
	I0921 22:02:37.030793  163433 kubeadm.go:317] 
	I0921 22:02:37.030849  163433 kubeadm.go:317] This error is likely caused by:
	I0921 22:02:37.030900  163433 kubeadm.go:317] 	- The kubelet is not running
	I0921 22:02:37.031052  163433 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0921 22:02:37.031070  163433 kubeadm.go:317] 
	I0921 22:02:37.031191  163433 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0921 22:02:37.031243  163433 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0921 22:02:37.031297  163433 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0921 22:02:37.031308  163433 kubeadm.go:317] 
	I0921 22:02:37.031422  163433 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0921 22:02:37.031534  163433 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I0921 22:02:37.031667  163433 kubeadm.go:317] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I0921 22:02:37.031810  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0921 22:02:37.031929  163433 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0921 22:02:37.032051  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
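	With the docker driver, the checks kubeadm suggests run inside the minikube node container rather than on the host. A sketch, assuming this stream belongs to the kubernetes-upgrade profile's container (the log does not name the container at this point):
	    docker exec kubernetes-upgrade-20220921215522-10174 systemctl status kubelet --no-pager
	    docker exec kubernetes-upgrade-20220921215522-10174 journalctl -u kubelet --no-pager -n 50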
	W0921 22:02:37.032372  163433 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:00:40.923939    8214 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0921 22:02:37.032427  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:02:38.854856  163433 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.822402127s)
	I0921 22:02:38.854917  163433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:02:38.864392  163433 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:02:38.864449  163433 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:02:38.871143  163433 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:02:38.871179  163433 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:02:38.909279  163433 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:02:38.909365  163433 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:02:38.936932  163433 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:02:38.937031  163433 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:02:38.937063  163433 kubeadm.go:317] OS: Linux
	I0921 22:02:38.937103  163433 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:02:38.937150  163433 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:02:38.937239  163433 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:02:38.937318  163433 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:02:38.937390  163433 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:02:38.937481  163433 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:02:38.937544  163433 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:02:38.937595  163433 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:02:38.937662  163433 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:02:39.000511  163433 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:02:39.000640  163433 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:02:39.000796  163433 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:02:39.116829  163433 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:02:35.295884  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:37.296675  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:36.784618  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:38.785789  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:39.119686  163433 out.go:204]   - Generating certificates and keys ...
	I0921 22:02:39.119846  163433 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:02:39.119930  163433 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:02:39.120020  163433 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:02:39.120098  163433 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:02:39.120195  163433 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:02:39.120267  163433 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:02:39.120352  163433 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:02:39.120437  163433 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:02:39.120536  163433 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:02:39.120627  163433 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:02:39.120686  163433 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:02:39.120764  163433 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:02:39.214860  163433 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:02:39.365850  163433 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:02:39.700601  163433 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:02:40.053396  163433 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:02:40.065917  163433 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:02:40.066776  163433 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:02:40.066842  163433 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:02:40.151802  163433 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:02:40.154174  163433 out.go:204]   - Booting up control plane ...
	I0921 22:02:40.154312  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:02:40.154475  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:02:40.155428  163433 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:02:40.156385  163433 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:02:40.159229  163433 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:02:39.796175  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:41.796249  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:44.295204  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:41.282949  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:43.283158  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:46.295601  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:48.795781  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:45.783545  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:48.283437  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:51.295524  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:53.795526  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:50.283654  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:52.785091  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:54.785272  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:56.295265  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:58.296043  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:57.286226  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:59.784817  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:00.796016  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:03.295516  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:01.785748  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:04.283055  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:05.796132  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:08.295230  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:06.283163  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:08.785308  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:10.295605  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:12.794717  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:11.283557  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:13.784480  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:14.795958  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:17.295388  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:15.784531  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:17.785041  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:19.785311  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
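
The interleaved pod_ready.go:102 lines above come from two test profiles (processes 203160 and 199263) polling in parallel for a pod's Ready condition. Below is a minimal Go sketch of that polling pattern using client-go's wait helpers; it is an illustration of the technique, not minikube's actual pod_ready.go, and the function name waitPodReady is made up for this sketch.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the
// timeout expires, printing the current status on each miss, much like
// the pod_ready.go:102 entries in the log above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				if c.Status == corev1.ConditionTrue {
					return true, nil
				}
				fmt.Printf("pod %q in %q namespace has status Ready:%q\n", name, ns, c.Status)
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-565d847f94-hsd8z", 5*time.Minute); err != nil {
		log.Fatal(err) // "timed out waiting for the condition" on failure
	}
}
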
	I0921 22:03:20.159558  163433 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I0921 22:03:20.159897  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:20.160149  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
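
The [kubelet-check] failure just above is an HTTP probe of the kubelet's local healthz endpoint; "connection refused" on 127.0.0.1:10248 means nothing is listening there, i.e. the kubelet process never stayed up. A hedged sketch of the equivalent check in Go (kubeadm's real check is this GET in a retry loop):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The kubelet serves a liveness endpoint on 127.0.0.1:10248.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// e.g. Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248:
		// connect: connection refused -- the kubelet process is not running.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}
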
	I0921 22:03:19.796040  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:22.295383  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:22.282513  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:24.282757  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:25.160511  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:25.160744  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:03:24.795802  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:27.296018  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:26.282981  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:28.283100  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:29.795884  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:31.796035  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:34.295079  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:30.785240  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:33.283585  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:35.160837  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:35.161107  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:03:36.297275  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:38.795043  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:35.784442  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:37.784725  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:39.784865  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:41.295402  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:43.295511  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:42.282777  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:44.282942  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:45.795988  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:48.295486  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:46.783254  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:49.283243  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:50.295629  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:52.794913  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:51.783288  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:53.784868  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:55.162112  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:03:55.162307  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:03:54.796081  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:57.296420  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:55.786215  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:58.282766  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:59.795300  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:01.796087  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:04.295001  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:00.782996  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:02.784677  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:06.295897  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:08.795477  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:05.285308  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:07.783365  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:09.785992  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:11.295345  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:13.295670  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:12.282740  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:14.283416  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:15.795912  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:18.295246  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:16.785048  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:19.283886  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:20.794913  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:22.796143  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:21.783586  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:24.283127  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:25.296005  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:27.296540  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:26.283486  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:27.790035  199263 pod_ready.go:81] duration metric: took 4m0.046661281s waiting for pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace to be "Ready" ...
	E0921 22:04:27.790062  199263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0921 22:04:27.790073  199263 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-lcxq5" in "kube-system" namespace to be "Ready" ...
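
The "timed out waiting for the condition" error at pod_ready.go:66 above is the apimachinery wait package's standard timeout error surfacing once the readiness budget is exhausted (the wait stopped at roughly 4m0s despite the 5m0s ceiling, presumably cut short by an outer deadline). A minimal illustration of where that exact string comes from:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// A condition that never succeeds, to show the canonical timeout error.
	err := wait.PollImmediate(100*time.Millisecond, 500*time.Millisecond,
		func() (bool, error) { return false, nil })
	fmt.Println(err) // prints: timed out waiting for the condition
}
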
	I0921 22:04:29.800454  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:29.795939  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:32.295828  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:34.296018  203160 pod_ready.go:102] pod "coredns-565d847f94-hsd8z" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:32.300614  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:34.301269  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:35.163184  163433 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0921 22:04:35.163483  163433 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0921 22:04:35.163518  163433 kubeadm.go:317] 
	I0921 22:04:35.163584  163433 kubeadm.go:317] Unfortunately, an error has occurred:
	I0921 22:04:35.163648  163433 kubeadm.go:317] 	timed out waiting for the condition
	I0921 22:04:35.163661  163433 kubeadm.go:317] 
	I0921 22:04:35.163710  163433 kubeadm.go:317] This error is likely caused by:
	I0921 22:04:35.163796  163433 kubeadm.go:317] 	- The kubelet is not running
	I0921 22:04:35.163928  163433 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0921 22:04:35.163937  163433 kubeadm.go:317] 
	I0921 22:04:35.164060  163433 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0921 22:04:35.164118  163433 kubeadm.go:317] 	- 'systemctl status kubelet'
	I0921 22:04:35.164157  163433 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I0921 22:04:35.164173  163433 kubeadm.go:317] 
	I0921 22:04:35.164320  163433 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0921 22:04:35.164436  163433 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0921 22:04:35.164551  163433 kubeadm.go:317] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0921 22:04:35.164687  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	I0921 22:04:35.164801  163433 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I0921 22:04:35.164918  163433 kubeadm.go:317] 	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	I0921 22:04:35.166104  163433 kubeadm.go:317] W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:04:35.166318  163433 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:04:35.166415  163433 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:04:35.166492  163433 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0921 22:04:35.166592  163433 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
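
Following the troubleshooting advice kubeadm prints above, minikube next enumerates CRI containers (the cri.go/logs.go lines below), and every lookup comes back empty because no control-plane container ever started. A sketch of that same crictl invocation driven from Go, under the assumption that sudo is available non-interactively as it is inside the minikube node:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same runtime endpoint and flags as the kubeadm advice above.
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///run/containerd/containerd.sock",
		"ps", "-a").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl failed: %v\n%s", err, out)
	}
	// Keep only Kubernetes containers, mirroring `... | grep kube | grep -v pause`.
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
		}
	}
}
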
	I0921 22:04:35.166696  163433 kubeadm.go:398] StartCluster complete in 7m57.461301325s
	I0921 22:04:35.166738  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0921 22:04:35.166788  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0921 22:04:35.191228  163433 cri.go:87] found id: ""
	I0921 22:04:35.191254  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.191261  163433 logs.go:276] No container was found matching "kube-apiserver"
	I0921 22:04:35.191272  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0921 22:04:35.191332  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0921 22:04:35.214715  163433 cri.go:87] found id: ""
	I0921 22:04:35.214744  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.214750  163433 logs.go:276] No container was found matching "etcd"
	I0921 22:04:35.214756  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0921 22:04:35.214804  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0921 22:04:35.239162  163433 cri.go:87] found id: ""
	I0921 22:04:35.239186  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.239192  163433 logs.go:276] No container was found matching "coredns"
	I0921 22:04:35.239197  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0921 22:04:35.239252  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0921 22:04:35.263333  163433 cri.go:87] found id: ""
	I0921 22:04:35.263357  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.263363  163433 logs.go:276] No container was found matching "kube-scheduler"
	I0921 22:04:35.263368  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0921 22:04:35.263407  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0921 22:04:35.286321  163433 cri.go:87] found id: ""
	I0921 22:04:35.286347  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.286355  163433 logs.go:276] No container was found matching "kube-proxy"
	I0921 22:04:35.286364  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0921 22:04:35.286426  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0921 22:04:35.314686  163433 cri.go:87] found id: ""
	I0921 22:04:35.314714  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.314722  163433 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0921 22:04:35.314730  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0921 22:04:35.314783  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0921 22:04:35.338110  163433 cri.go:87] found id: ""
	I0921 22:04:35.338142  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.338151  163433 logs.go:276] No container was found matching "storage-provisioner"
	I0921 22:04:35.338160  163433 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0921 22:04:35.338240  163433 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0921 22:04:35.363135  163433 cri.go:87] found id: ""
	I0921 22:04:35.363167  163433 logs.go:274] 0 containers: []
	W0921 22:04:35.363176  163433 logs.go:276] No container was found matching "kube-controller-manager"
	I0921 22:04:35.363188  163433 logs.go:123] Gathering logs for kubelet ...
	I0921 22:04:35.363200  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0921 22:04:35.379811  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:45 kubernetes-upgrade-20220921215522-10174 kubelet[12183]: E0921 22:03:45.374524   12183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.380296  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12194]: E0921 22:03:46.123389   12194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.380707  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12205]: E0921 22:03:46.873433   12205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.381121  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:47 kubernetes-upgrade-20220921215522-10174 kubelet[12216]: E0921 22:03:47.622295   12216 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.381546  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:48 kubernetes-upgrade-20220921215522-10174 kubelet[12227]: E0921 22:03:48.372753   12227 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.381922  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:49 kubernetes-upgrade-20220921215522-10174 kubelet[12238]: E0921 22:03:49.121733   12238 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.382304  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:49 kubernetes-upgrade-20220921215522-10174 kubelet[12249]: E0921 22:03:49.874411   12249 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.382688  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:50 kubernetes-upgrade-20220921215522-10174 kubelet[12260]: E0921 22:03:50.621954   12260 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.383079  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:51 kubernetes-upgrade-20220921215522-10174 kubelet[12271]: E0921 22:03:51.373297   12271 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.383463  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:52 kubernetes-upgrade-20220921215522-10174 kubelet[12282]: E0921 22:03:52.124808   12282 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.383909  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:52 kubernetes-upgrade-20220921215522-10174 kubelet[12293]: E0921 22:03:52.872464   12293 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.384310  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:53 kubernetes-upgrade-20220921215522-10174 kubelet[12304]: E0921 22:03:53.621146   12304 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.384694  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:54 kubernetes-upgrade-20220921215522-10174 kubelet[12315]: E0921 22:03:54.372737   12315 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.385094  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:55 kubernetes-upgrade-20220921215522-10174 kubelet[12326]: E0921 22:03:55.122043   12326 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.385483  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:55 kubernetes-upgrade-20220921215522-10174 kubelet[12337]: E0921 22:03:55.873129   12337 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.385896  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:56 kubernetes-upgrade-20220921215522-10174 kubelet[12349]: E0921 22:03:56.622479   12349 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.386331  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:57 kubernetes-upgrade-20220921215522-10174 kubelet[12360]: E0921 22:03:57.373797   12360 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.386778  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:58 kubernetes-upgrade-20220921215522-10174 kubelet[12371]: E0921 22:03:58.123173   12371 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.387176  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:58 kubernetes-upgrade-20220921215522-10174 kubelet[12383]: E0921 22:03:58.872655   12383 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.387549  163433 logs.go:138] Found kubelet problem: Sep 21 22:03:59 kubernetes-upgrade-20220921215522-10174 kubelet[12394]: E0921 22:03:59.624915   12394 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.387984  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:00 kubernetes-upgrade-20220921215522-10174 kubelet[12405]: E0921 22:04:00.373541   12405 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.388383  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:01 kubernetes-upgrade-20220921215522-10174 kubelet[12416]: E0921 22:04:01.125035   12416 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.388766  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:01 kubernetes-upgrade-20220921215522-10174 kubelet[12428]: E0921 22:04:01.874353   12428 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.389209  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:02 kubernetes-upgrade-20220921215522-10174 kubelet[12439]: E0921 22:04:02.624609   12439 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.389617  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:03 kubernetes-upgrade-20220921215522-10174 kubelet[12450]: E0921 22:04:03.374384   12450 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.390085  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:04 kubernetes-upgrade-20220921215522-10174 kubelet[12461]: E0921 22:04:04.121121   12461 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.390489  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:04 kubernetes-upgrade-20220921215522-10174 kubelet[12471]: E0921 22:04:04.874129   12471 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.390875  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:05 kubernetes-upgrade-20220921215522-10174 kubelet[12482]: E0921 22:04:05.622119   12482 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.391301  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:06 kubernetes-upgrade-20220921215522-10174 kubelet[12494]: E0921 22:04:06.373783   12494 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.391702  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:07 kubernetes-upgrade-20220921215522-10174 kubelet[12504]: E0921 22:04:07.122518   12504 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.392128  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:07 kubernetes-upgrade-20220921215522-10174 kubelet[12516]: E0921 22:04:07.873670   12516 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.392780  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:08 kubernetes-upgrade-20220921215522-10174 kubelet[12527]: E0921 22:04:08.622374   12527 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.393441  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:09 kubernetes-upgrade-20220921215522-10174 kubelet[12539]: E0921 22:04:09.380221   12539 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.393896  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:10 kubernetes-upgrade-20220921215522-10174 kubelet[12549]: E0921 22:04:10.123024   12549 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.394289  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:10 kubernetes-upgrade-20220921215522-10174 kubelet[12560]: E0921 22:04:10.871810   12560 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.394689  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:11 kubernetes-upgrade-20220921215522-10174 kubelet[12570]: E0921 22:04:11.623184   12570 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.395075  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:12 kubernetes-upgrade-20220921215522-10174 kubelet[12581]: E0921 22:04:12.373639   12581 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.395484  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:13 kubernetes-upgrade-20220921215522-10174 kubelet[12592]: E0921 22:04:13.122993   12592 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.395916  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:13 kubernetes-upgrade-20220921215522-10174 kubelet[12602]: E0921 22:04:13.874559   12602 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.396306  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:14 kubernetes-upgrade-20220921215522-10174 kubelet[12612]: E0921 22:04:14.623058   12612 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.396713  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:15 kubernetes-upgrade-20220921215522-10174 kubelet[12623]: E0921 22:04:15.372885   12623 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.397101  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:16 kubernetes-upgrade-20220921215522-10174 kubelet[12633]: E0921 22:04:16.123144   12633 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.397495  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:16 kubernetes-upgrade-20220921215522-10174 kubelet[12644]: E0921 22:04:16.872120   12644 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.397886  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:17 kubernetes-upgrade-20220921215522-10174 kubelet[12655]: E0921 22:04:17.621957   12655 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.398268  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:18 kubernetes-upgrade-20220921215522-10174 kubelet[12666]: E0921 22:04:18.372467   12666 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.398694  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:19 kubernetes-upgrade-20220921215522-10174 kubelet[12676]: E0921 22:04:19.122753   12676 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.399092  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:19 kubernetes-upgrade-20220921215522-10174 kubelet[12687]: E0921 22:04:19.876067   12687 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.399483  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:20 kubernetes-upgrade-20220921215522-10174 kubelet[12698]: E0921 22:04:20.623954   12698 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.399925  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:21 kubernetes-upgrade-20220921215522-10174 kubelet[12709]: E0921 22:04:21.373994   12709 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.400440  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:22 kubernetes-upgrade-20220921215522-10174 kubelet[12720]: E0921 22:04:22.124039   12720 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.400894  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:22 kubernetes-upgrade-20220921215522-10174 kubelet[12730]: E0921 22:04:22.872700   12730 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.401275  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:23 kubernetes-upgrade-20220921215522-10174 kubelet[12741]: E0921 22:04:23.620191   12741 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.401664  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:24 kubernetes-upgrade-20220921215522-10174 kubelet[12752]: E0921 22:04:24.373902   12752 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.402051  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:25 kubernetes-upgrade-20220921215522-10174 kubelet[12763]: E0921 22:04:25.123958   12763 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.402430  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:25 kubernetes-upgrade-20220921215522-10174 kubelet[12773]: E0921 22:04:25.874249   12773 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.402818  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:26 kubernetes-upgrade-20220921215522-10174 kubelet[12786]: E0921 22:04:26.623403   12786 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.403198  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:27 kubernetes-upgrade-20220921215522-10174 kubelet[12797]: E0921 22:04:27.375008   12797 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.403573  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:28 kubernetes-upgrade-20220921215522-10174 kubelet[12808]: E0921 22:04:28.122905   12808 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.403988  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:28 kubernetes-upgrade-20220921215522-10174 kubelet[12819]: E0921 22:04:28.871801   12819 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.404363  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:29 kubernetes-upgrade-20220921215522-10174 kubelet[12830]: E0921 22:04:29.622004   12830 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.404759  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:30 kubernetes-upgrade-20220921215522-10174 kubelet[12842]: E0921 22:04:30.373730   12842 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.405157  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:31 kubernetes-upgrade-20220921215522-10174 kubelet[12854]: E0921 22:04:31.121895   12854 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.405540  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:31 kubernetes-upgrade-20220921215522-10174 kubelet[12865]: E0921 22:04:31.872824   12865 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.405921  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:32 kubernetes-upgrade-20220921215522-10174 kubelet[12876]: E0921 22:04:32.622571   12876 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.406306  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:33 kubernetes-upgrade-20220921215522-10174 kubelet[12887]: E0921 22:04:33.373474   12887 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.406697  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 kubelet[12898]: E0921 22:04:34.121999   12898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	W0921 22:04:35.407098  163433 logs.go:138] Found kubelet problem: Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 kubelet[12909]: E0921 22:04:34.878168   12909 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
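
These journal entries are the real failure: kubelet rejects --cni-conf-dir, a dockershim-era networking flag removed in Kubernetes 1.24, yet the restarted node apparently still launches kubelet with arguments carried over from the original v1.16.0 profile, so kubelet crash-loops and the control plane never comes up. Below is a hypothetical cleanup helper (not part of minikube) showing the flag-filtering step such an upgrade would need; regenerating the kubelet flags for the target version, rather than reusing the old profile's, is the more likely fix on the minikube side.

package main

import (
	"fmt"
	"strings"
)

// stripFlag drops any "--name=value" or "--name value" style argument
// whose name matches flag from a kubelet argument list.
func stripFlag(args []string, flag string) []string {
	out := args[:0] // in-place filter over the same backing array
	skipNext := false
	for _, a := range args {
		switch {
		case skipNext:
			skipNext = false // drop the value of a separate-value flag
		case a == flag:
			skipNext = true
		case strings.HasPrefix(a, flag+"="):
			// drop e.g. "--cni-conf-dir=/etc/cni/net.d"
		default:
			out = append(out, a)
		}
	}
	return out
}

func main() {
	args := []string{"--container-runtime=remote", "--cni-conf-dir=/etc/cni/net.d", "--hostname-override=minikube"}
	fmt.Println(stripFlag(args, "--cni-conf-dir"))
	// Output: [--container-runtime=remote --hostname-override=minikube]
}
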
	I0921 22:04:35.407226  163433 logs.go:123] Gathering logs for dmesg ...
	I0921 22:04:35.407241  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0921 22:04:35.428361  163433 logs.go:123] Gathering logs for describe nodes ...
	I0921 22:04:35.428401  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0921 22:04:35.484445  163433 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0921 22:04:35.484475  163433 logs.go:123] Gathering logs for containerd ...
	I0921 22:04:35.484488  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0921 22:04:35.545769  163433 logs.go:123] Gathering logs for container status ...
	I0921 22:04:35.545802  163433 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0921 22:04:35.571634  163433 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0921 22:04:35.571680  163433 out.go:239] * 
	W0921 22:04:35.571931  163433 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0921 22:04:35.571967  163433 out.go:239] * 
	W0921 22:04:35.572762  163433 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:04:35.575608  163433 out.go:177] X Problems detected in kubelet:
	I0921 22:04:35.576957  163433 out.go:177]   Sep 21 22:03:45 kubernetes-upgrade-20220921215522-10174 kubelet[12183]: E0921 22:03:45.374524   12183 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:04:35.578291  163433 out.go:177]   Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12194]: E0921 22:03:46.123389   12194 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:04:35.579626  163433 out.go:177]   Sep 21 22:03:46 kubernetes-upgrade-20220921215522-10174 kubelet[12205]: E0921 22:03:46.873433   12205 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	I0921 22:04:35.583058  163433 out.go:177] 
	W0921 22:04:35.584539  163433 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.25.2
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1017-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
	
	stderr:
	W0921 22:02:38.904023   11075 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0921 22:04:35.584658  163433 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0921 22:04:35.584719  163433 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0921 22:04:35.586989  163433 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 21:56:11 UTC, end at Wed 2022-09-21 22:04:36 UTC. --
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.646790703Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.664755311Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.664806600Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.682342993Z" level=info msg="StopPodSandbox for \"format\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.682393627Z" level=error msg="StopPodSandbox for \"format\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"format\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.699043301Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.699124093Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.716535656Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.716594187Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.732898902Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.732955687Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.749905496Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.749963734Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.765653033Z" level=info msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.765713465Z" level=error msg="StopPodSandbox for \"endpoint=\\\"/run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"endpoint=\\\"/run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.782759775Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.782823951Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.800951175Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.801013780Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.816954723Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.817008244Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.833786120Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.833839985Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.849942783Z" level=info msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\""
	Sep 21 22:02:38 kubernetes-upgrade-20220921215522-10174 containerd[485]: time="2022-09-21T22:02:38.849991644Z" level=error msg="StopPodSandbox for \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox \"URL=\\\"unix:///run/containerd/containerd.sock\\\"\": not found"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +2.955867] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.015869] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.019929] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[ +12.977079] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.009970] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.023922] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +2.963873] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.035863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.019908] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +2.943857] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.027872] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	[  +1.019949] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 92 2a 6c 80 87 08 06
	
	* 
	* ==> kernel <==
	*  22:04:37 up 47 min,  0 users,  load average: 1.85, 2.67, 2.24
	Linux kubernetes-upgrade-20220921215522-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 21:56:11 UTC, end at Wed 2022-09-21 22:04:37 UTC. --
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 kubelet[12898]: E0921 22:04:34.121999   12898 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 153.
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 kubelet[12909]: E0921 22:04:34.878168   12909 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 21 22:04:34 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 21 22:04:35 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 154.
	Sep 21 22:04:35 kubernetes-upgrade-20220921215522-10174 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:35 kubernetes-upgrade-20220921215522-10174 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:35 kubernetes-upgrade-20220921215522-10174 kubelet[13056]: E0921 22:04:35.635120   13056 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Sep 21 22:04:35 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 21 22:04:35 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 21 22:04:36 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Sep 21 22:04:36 kubernetes-upgrade-20220921215522-10174 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:36 kubernetes-upgrade-20220921215522-10174 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:36 kubernetes-upgrade-20220921215522-10174 kubelet[13076]: E0921 22:04:36.340866   13076 run.go:74] "command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"
	Sep 21 22:04:36 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 21 22:04:36 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 21 22:04:37 kubernetes-upgrade-20220921215522-10174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Sep 21 22:04:37 kubernetes-upgrade-20220921215522-10174 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 21 22:04:37 kubernetes-upgrade-20220921215522-10174 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0921 22:04:37.066160  227518 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
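
Note: the "==> kubelet <==" journal above shows the concrete failure behind the wait-control-plane timeout: the v1.25 kubelet rejects --cni-conf-dir (its dockershim-era CNI flags are gone in this release), so the kubeadm-flags.env written for the original v1.16 cluster keeps crash-looping the service. A quick confirmation from inside the node, as a sketch (it assumes `minikube ssh` as the entry point; the flags file path is the one kubeadm reported above):

	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-20220921215522-10174 -- grep cni-conf-dir /var/lib/kubelet/kubeadm-flags.env
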
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220921215522-10174 -n kubernetes-upgrade-20220921215522-10174
E0921 22:04:37.249434   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220921215522-10174 -n kubernetes-upgrade-20220921215522-10174: exit status 2 (375.582999ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-20220921215522-10174" apiserver is not running, skipping kubectl commands (state="Stopped")
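
For the crashed control plane itself, kubeadm's hint in the failure output gives the manual follow-up; a sketch of running it against this profile (again assuming `minikube ssh`; the runtime endpoint is the one kubeadm printed). Consistent with the empty "==> container status <==" table above, it would list no kube containers here:

	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-20220921215522-10174 -- sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause
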
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220921215522-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220921215522-10174

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220921215522-10174: (2.046861985s)
--- FAIL: TestKubernetesUpgrade (556.86s)
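
If reproducing locally, minikube's printed suggestion is the first retry to try, though the kubelet journal points at the stale CNI flags rather than the cgroup driver; a sketch reusing this test's profile and runtime flags plus the suggested extra-config:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-20220921215522-10174 --kubernetes-version=v1.25.2 --driver=docker --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd
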

                                                
                                    
TestNetworkPlugins/group/calico/Start (528.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220921215524-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220921215524-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m48.059159746s)

                                                
                                                
-- stdout --
	* [calico-20220921215524-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node calico-20220921215524-10174 in cluster calico-20220921215524-10174
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 21:59:39.823773  199263 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:59:39.824008  199263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:39.824023  199263 out.go:309] Setting ErrFile to fd 2...
	I0921 21:59:39.824029  199263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:59:39.824159  199263 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 21:59:39.824786  199263 out.go:303] Setting JSON to false
	I0921 21:59:39.826441  199263 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2531,"bootTime":1663795049,"procs":955,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 21:59:39.826514  199263 start.go:125] virtualization: kvm guest
	I0921 21:59:39.829284  199263 out.go:177] * [calico-20220921215524-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 21:59:39.830780  199263 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:59:39.830797  199263 notify.go:214] Checking for updates...
	I0921 21:59:39.832148  199263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:59:39.833674  199263 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:59:39.835221  199263 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 21:59:39.836595  199263 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 21:59:39.838330  199263 config.go:180] Loaded profile config "cilium-20220921215524-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:59:39.838466  199263 config.go:180] Loaded profile config "kindnet-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:59:39.838597  199263 config.go:180] Loaded profile config "kubernetes-upgrade-20220921215522-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:59:39.838665  199263 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:59:39.870589  199263 docker.go:137] docker version: linux-20.10.18
	I0921 21:59:39.870693  199263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:59:39.962448  199263 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 21:59:39.890339957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:59:39.962547  199263 docker.go:254] overlay module found
	I0921 21:59:39.964740  199263 out.go:177] * Using the docker driver based on user configuration
	I0921 21:59:39.966003  199263 start.go:284] selected driver: docker
	I0921 21:59:39.966027  199263 start.go:808] validating driver "docker" against <nil>
	I0921 21:59:39.966048  199263 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:59:39.966883  199263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:59:40.060813  199263 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 21:59:39.987748243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:59:40.060999  199263 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 21:59:40.061212  199263 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 21:59:40.063370  199263 out.go:177] * Using Docker driver with root privileges
	I0921 21:59:40.064551  199263 cni.go:95] Creating CNI manager for "calico"
	I0921 21:59:40.064571  199263 start_flags.go:311] Found "Calico" CNI - setting NetworkPlugin=cni
	I0921 21:59:40.064601  199263 start_flags.go:316] config:
	{Name:calico-20220921215524-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:calico-20220921215524-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:59:40.066572  199263 out.go:177] * Starting control plane node calico-20220921215524-10174 in cluster calico-20220921215524-10174
	I0921 21:59:40.068284  199263 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 21:59:40.069589  199263 out.go:177] * Pulling base image ...
	I0921 21:59:40.070728  199263 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 21:59:40.070768  199263 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 21:59:40.070787  199263 cache.go:57] Caching tarball of preloaded images
	I0921 21:59:40.070764  199263 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:59:40.071042  199263 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 21:59:40.071060  199263 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 21:59:40.071156  199263 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/config.json ...
	I0921 21:59:40.071175  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/config.json: {Name:mk67b11c67f43149d51485436827c52f0eefb3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 21:59:40.100339  199263 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 21:59:40.100378  199263 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 21:59:40.100400  199263 cache.go:208] Successfully downloaded all kic artifacts
	I0921 21:59:40.100446  199263 start.go:364] acquiring machines lock for calico-20220921215524-10174: {Name:mk5600a2f8707e2d86811dda33539e9b4f3d62ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 21:59:40.100607  199263 start.go:368] acquired machines lock for "calico-20220921215524-10174" in 129.706µs
	I0921 21:59:40.100649  199263 start.go:93] Provisioning new machine with config: &{Name:calico-20220921215524-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:calico-20220921215524-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 21:59:40.100755  199263 start.go:125] createHost starting for "" (driver="docker")
	I0921 21:59:40.103289  199263 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0921 21:59:40.103649  199263 start.go:159] libmachine.API.Create for "calico-20220921215524-10174" (driver="docker")
	I0921 21:59:40.103696  199263 client.go:168] LocalClient.Create starting
	I0921 21:59:40.103818  199263 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 21:59:40.103868  199263 main.go:134] libmachine: Decoding PEM data...
	I0921 21:59:40.103892  199263 main.go:134] libmachine: Parsing certificate...
	I0921 21:59:40.103984  199263 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 21:59:40.104010  199263 main.go:134] libmachine: Decoding PEM data...
	I0921 21:59:40.104025  199263 main.go:134] libmachine: Parsing certificate...
	I0921 21:59:40.104423  199263 cli_runner.go:164] Run: docker network inspect calico-20220921215524-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 21:59:40.128203  199263 cli_runner.go:211] docker network inspect calico-20220921215524-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 21:59:40.128270  199263 network_create.go:272] running [docker network inspect calico-20220921215524-10174] to gather additional debugging logs...
	I0921 21:59:40.128288  199263 cli_runner.go:164] Run: docker network inspect calico-20220921215524-10174
	W0921 21:59:40.151487  199263 cli_runner.go:211] docker network inspect calico-20220921215524-10174 returned with exit code 1
	I0921 21:59:40.151518  199263 network_create.go:275] error running [docker network inspect calico-20220921215524-10174]: docker network inspect calico-20220921215524-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220921215524-10174
	I0921 21:59:40.151546  199263 network_create.go:277] output of [docker network inspect calico-20220921215524-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220921215524-10174
	
	** /stderr **
	I0921 21:59:40.151594  199263 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 21:59:40.174918  199263 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 21:59:40.175683  199263 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 21:59:40.176704  199263 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-454b31bb712a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:f9:25:10:f3}}
	I0921 21:59:40.177580  199263 network.go:241] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-487d4b5affb3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:d0:0d:8f:9f}}
	I0921 21:59:40.178308  199263 network.go:241] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-f51059d922f5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:4c:02:60:fc}}
	I0921 21:59:40.179089  199263 network.go:290] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc000a68570] misses:0}
	I0921 21:59:40.179124  199263 network.go:236] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 21:59:40.179135  199263 network_create.go:115] attempt to create docker network calico-20220921215524-10174 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0921 21:59:40.179189  199263 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-20220921215524-10174 calico-20220921215524-10174
	I0921 21:59:40.238012  199263 network_create.go:99] docker network calico-20220921215524-10174 192.168.94.0/24 created
	I0921 21:59:40.238042  199263 kic.go:106] calculated static IP "192.168.94.2" for the "calico-20220921215524-10174" container
	I0921 21:59:40.238111  199263 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 21:59:40.262697  199263 cli_runner.go:164] Run: docker volume create calico-20220921215524-10174 --label name.minikube.sigs.k8s.io=calico-20220921215524-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 21:59:40.285337  199263 oci.go:103] Successfully created a docker volume calico-20220921215524-10174
	I0921 21:59:40.285427  199263 cli_runner.go:164] Run: docker run --rm --name calico-20220921215524-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220921215524-10174 --entrypoint /usr/bin/test -v calico-20220921215524-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 21:59:40.848653  199263 oci.go:107] Successfully prepared a docker volume calico-20220921215524-10174
	I0921 21:59:40.848704  199263 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 21:59:40.848724  199263 kic.go:179] Starting extracting preloaded images to volume ...
	I0921 21:59:40.848795  199263 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220921215524-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir
	I0921 21:59:47.537508  199263 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220921215524-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir: (6.688619594s)
	I0921 21:59:47.537542  199263 kic.go:188] duration metric: took 6.688814 seconds to extract preloaded images to volume
	W0921 21:59:47.560099  199263 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 21:59:47.560255  199263 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 21:59:47.678250  199263 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220921215524-10174 --name calico-20220921215524-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220921215524-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220921215524-10174 --network calico-20220921215524-10174 --ip 192.168.94.2 --volume calico-20220921215524-10174:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 21:59:48.135274  199263 cli_runner.go:164] Run: docker container inspect calico-20220921215524-10174 --format={{.State.Running}}
	I0921 21:59:48.162121  199263 cli_runner.go:164] Run: docker container inspect calico-20220921215524-10174 --format={{.State.Status}}
	I0921 21:59:48.191586  199263 cli_runner.go:164] Run: docker exec calico-20220921215524-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 21:59:48.257935  199263 oci.go:144] the created container "calico-20220921215524-10174" has a running status.
	I0921 21:59:48.257975  199263 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa...
	I0921 21:59:48.530238  199263 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 21:59:48.666423  199263 cli_runner.go:164] Run: docker container inspect calico-20220921215524-10174 --format={{.State.Status}}
	I0921 21:59:48.721673  199263 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 21:59:48.721700  199263 kic_runner.go:114] Args: [docker exec --privileged calico-20220921215524-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 21:59:48.833541  199263 cli_runner.go:164] Run: docker container inspect calico-20220921215524-10174 --format={{.State.Status}}
	I0921 21:59:48.864845  199263 machine.go:88] provisioning docker machine ...
	I0921 21:59:48.864888  199263 ubuntu.go:169] provisioning hostname "calico-20220921215524-10174"
	I0921 21:59:48.864952  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 21:59:48.901062  199263 main.go:134] libmachine: Using SSH client type: native
	I0921 21:59:48.901293  199263 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49383 <nil> <nil>}
	I0921 21:59:48.901321  199263 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220921215524-10174 && echo "calico-20220921215524-10174" | sudo tee /etc/hostname
	I0921 21:59:49.055921  199263 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220921215524-10174
	
	I0921 21:59:49.056005  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 21:59:49.090831  199263 main.go:134] libmachine: Using SSH client type: native
	I0921 21:59:49.091033  199263 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49383 <nil> <nil>}
	I0921 21:59:49.091061  199263 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220921215524-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220921215524-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220921215524-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 21:59:49.232343  199263 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 21:59:49.232382  199263 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 21:59:49.232428  199263 ubuntu.go:177] setting up certificates
	I0921 21:59:49.232438  199263 provision.go:83] configureAuth start
	I0921 21:59:49.232494  199263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220921215524-10174
	I0921 21:59:49.262468  199263 provision.go:138] copyHostCerts
	I0921 21:59:49.262526  199263 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 21:59:49.262539  199263 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 21:59:49.262593  199263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 21:59:49.262679  199263 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 21:59:49.262697  199263 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 21:59:49.262732  199263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 21:59:49.262797  199263 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 21:59:49.262811  199263 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 21:59:49.262852  199263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 21:59:49.262922  199263 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.calico-20220921215524-10174 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220921215524-10174]
	I0921 21:59:49.424950  199263 provision.go:172] copyRemoteCerts
	I0921 21:59:49.425023  199263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 21:59:49.425075  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 21:59:49.453592  199263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49383 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa Username:docker}
	I0921 21:59:49.548394  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 21:59:49.565872  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0921 21:59:49.584040  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 21:59:49.605741  199263 provision.go:86] duration metric: configureAuth took 373.290915ms
	I0921 21:59:49.605770  199263 ubuntu.go:193] setting minikube options for container-runtime
	I0921 21:59:49.605968  199263 config.go:180] Loaded profile config "calico-20220921215524-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:59:49.605983  199263 machine.go:91] provisioned docker machine in 741.112861ms
	I0921 21:59:49.605991  199263 client.go:171] LocalClient.Create took 9.502288638s
	I0921 21:59:49.606013  199263 start.go:167] duration metric: libmachine.API.Create for "calico-20220921215524-10174" took 9.502367498s
	I0921 21:59:49.606027  199263 start.go:300] post-start starting for "calico-20220921215524-10174" (driver="docker")
	I0921 21:59:49.606040  199263 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 21:59:49.606100  199263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 21:59:49.606148  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 21:59:49.639493  199263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49383 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa Username:docker}
	I0921 21:59:49.735980  199263 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 21:59:49.739286  199263 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 21:59:49.739315  199263 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 21:59:49.739331  199263 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 21:59:49.739338  199263 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 21:59:49.739354  199263 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 21:59:49.739412  199263 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 21:59:49.739517  199263 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 21:59:49.739630  199263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 21:59:49.747516  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 21:59:49.765461  199263 start.go:303] post-start completed in 159.417724ms
	I0921 21:59:49.765764  199263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220921215524-10174
	I0921 21:59:49.798018  199263 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/config.json ...
	I0921 21:59:49.798317  199263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:59:49.798368  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 21:59:49.832302  199263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49383 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa Username:docker}
	I0921 21:59:49.925349  199263 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 21:59:49.933752  199263 start.go:128] duration metric: createHost completed in 9.832982259s
	I0921 21:59:49.933781  199263 start.go:83] releasing machines lock for "calico-20220921215524-10174", held for 9.833151725s
	I0921 21:59:49.933875  199263 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220921215524-10174
	I0921 21:59:49.965412  199263 ssh_runner.go:195] Run: systemctl --version
	I0921 21:59:49.965448  199263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 21:59:49.965469  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 21:59:49.965501  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 21:59:49.996669  199263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49383 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa Username:docker}
	I0921 21:59:49.999090  199263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49383 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa Username:docker}
	I0921 21:59:50.093114  199263 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 21:59:50.137577  199263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 21:59:50.149818  199263 docker.go:188] disabling docker service ...
	I0921 21:59:50.149881  199263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 21:59:50.188580  199263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 21:59:50.199105  199263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 21:59:50.284180  199263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 21:59:50.364988  199263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 21:59:50.373813  199263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 21:59:50.386152  199263 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 21:59:50.393956  199263 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 21:59:50.401505  199263 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 21:59:50.409295  199263 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0921 21:59:50.417196  199263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 21:59:50.423397  199263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 21:59:50.429727  199263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 21:59:50.509189  199263 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 21:59:50.604957  199263 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 21:59:50.605026  199263 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 21:59:50.608900  199263 start.go:471] Will wait 60s for crictl version
	I0921 21:59:50.608950  199263 ssh_runner.go:195] Run: sudo crictl version
	I0921 21:59:50.635036  199263 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T21:59:50Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:00:01.683155  199263 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:00:01.707362  199263 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:00:01.707433  199263 ssh_runner.go:195] Run: containerd --version
	I0921 22:00:01.739812  199263 ssh_runner.go:195] Run: containerd --version
	I0921 22:00:01.777113  199263 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:00:01.778515  199263 cli_runner.go:164] Run: docker network inspect calico-20220921215524-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:00:01.805376  199263 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0921 22:00:01.809573  199263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:00:01.820791  199263 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:00:01.820874  199263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:00:01.848123  199263 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:00:01.848148  199263 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:00:01.848199  199263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:00:01.876902  199263 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:00:01.876932  199263 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:00:01.876987  199263 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:00:01.901678  199263 cni.go:95] Creating CNI manager for "calico"
	I0921 22:00:01.901713  199263 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:00:01.901740  199263 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220921215524-10174 NodeName:calico-20220921215524-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:00:01.901922  199263 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20220921215524-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
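The dump above is four kubeadm documents in one file: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (control-plane layout), KubeletConfiguration, and KubeProxyConfiguration. minikube renders it from the kubeadm options struct logged earlier; a toy sketch of that kind of rendering with text/template (template text and field names here are illustrative, not minikube's actual template) follows:

package main

import (
	"os"
	"text/template"
)

// Illustrative fragment only; the real template covers all four documents.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
clusterName: {{.ClusterName}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	t.Execute(os.Stdout, map[string]string{ // values taken from the dump above
		"KubernetesVersion": "v1.25.2",
		"ClusterName":       "mk",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceCIDR":       "10.96.0.0/12",
	})
}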
	I0921 22:00:01.902043  199263 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220921215524-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:calico-20220921215524-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
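The doubled ExecStart in the kubelet drop-in above is deliberate systemd idiom: for a non-oneshot service, a drop-in must first clear the inherited command list with a bare "ExecStart=" before defining a replacement, otherwise systemd rejects the second ExecStart.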
	I0921 22:00:01.902107  199263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:00:01.909470  199263 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:00:01.909550  199263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:00:01.916493  199263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (520 bytes)
	I0921 22:00:01.929449  199263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:00:01.942570  199263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
	I0921 22:00:01.955238  199263 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:00:01.958275  199263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:00:01.967628  199263 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174 for IP: 192.168.94.2
	I0921 22:00:01.967785  199263 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:00:01.967836  199263 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:00:01.967900  199263 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/client.key
	I0921 22:00:01.967925  199263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/client.crt with IP's: []
	I0921 22:00:02.272514  199263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/client.crt ...
	I0921 22:00:02.272541  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/client.crt: {Name:mk85e8295f8f475ffcf1187e70fe976fc9bebec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:02.272744  199263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/client.key ...
	I0921 22:00:02.272763  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/client.key: {Name:mkaa0e630afd4787c202832f4abf9e6f9a5cfaf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:02.272852  199263 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.key.ad8e880a
	I0921 22:00:02.272868  199263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:00:02.460093  199263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.crt.ad8e880a ...
	I0921 22:00:02.460131  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.crt.ad8e880a: {Name:mkb6e81c23e9f3700a4e49f45e151f5356460951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:02.460324  199263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.key.ad8e880a ...
	I0921 22:00:02.460340  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.key.ad8e880a: {Name:mkf04c2e50b92244bbc0c7e81bd4014cd64afb44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:02.460427  199263 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.crt
	I0921 22:00:02.460484  199263 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.key
	I0921 22:00:02.460531  199263 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.key
	I0921 22:00:02.460545  199263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.crt with IP's: []
	I0921 22:00:02.584096  199263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.crt ...
	I0921 22:00:02.584127  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.crt: {Name:mk0537a75591ae3946504206c23f128957ff406f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:02.584324  199263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.key ...
	I0921 22:00:02.584339  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.key: {Name:mk5e99f72efe4d8f727d3999513382a489d20424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:02.584511  199263 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:00:02.584548  199263 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:00:02.584562  199263 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:00:02.584590  199263 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:00:02.584616  199263 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:00:02.584639  199263 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:00:02.584683  199263 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:00:02.585289  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:00:02.605298  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:00:02.622981  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:00:02.640978  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/calico-20220921215524-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0921 22:00:02.658772  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:00:02.675725  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:00:02.692898  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:00:02.709665  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:00:02.725987  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:00:02.743611  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:00:02.760999  199263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:00:02.778287  199263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:00:02.793089  199263 ssh_runner.go:195] Run: openssl version
	I0921 22:00:02.798134  199263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:00:02.806315  199263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:00:02.809834  199263 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:00:02.809899  199263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:00:02.815703  199263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:00:02.824751  199263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:00:02.834966  199263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:00:02.838442  199263 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:00:02.838508  199263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:00:02.845169  199263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:00:02.854821  199263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:00:02.864973  199263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:00:02.868270  199263 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:00:02.868324  199263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:00:02.873565  199263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:00:02.881508  199263 kubeadm.go:396] StartCluster: {Name:calico-20220921215524-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:calico-20220921215524-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:00:02.881627  199263 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:00:02.881681  199263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:00:02.908112  199263 cri.go:87] found id: ""
	I0921 22:00:02.908229  199263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:00:02.915494  199263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:00:02.989285  199263 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:00:02.989350  199263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:00:03.021076  199263 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:00:03.021153  199263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:00:03.067227  199263 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:00:03.067297  199263 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:00:03.095697  199263 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:00:03.095849  199263 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:00:03.095911  199263 kubeadm.go:317] OS: Linux
	I0921 22:00:03.095977  199263 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:00:03.096044  199263 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:00:03.096102  199263 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:00:03.096174  199263 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:00:03.096233  199263 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:00:03.096304  199263 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:00:03.096375  199263 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:00:03.096446  199263 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:00:03.096520  199263 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:00:03.160883  199263 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:00:03.161022  199263 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:00:03.161176  199263 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:00:03.276834  199263 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:00:03.568448  199263 out.go:204]   - Generating certificates and keys ...
	I0921 22:00:03.568635  199263 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:00:03.568746  199263 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:00:03.568852  199263 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:00:03.665822  199263 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:00:03.944670  199263 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:00:03.992311  199263 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:00:04.068389  199263 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:00:04.068710  199263 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [calico-20220921215524-10174 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0921 22:00:04.240459  199263 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:00:04.240719  199263 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [calico-20220921215524-10174 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0921 22:00:04.367883  199263 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:00:04.557439  199263 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:00:04.780440  199263 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:00:04.780576  199263 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:00:04.904630  199263 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:00:05.104861  199263 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:00:05.245441  199263 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:00:05.362806  199263 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:00:05.437484  199263 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:00:05.438421  199263 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:00:05.438502  199263 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:00:05.526173  199263 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:00:05.669255  199263 out.go:204]   - Booting up control plane ...
	I0921 22:00:05.669466  199263 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:00:05.669579  199263 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:00:05.669670  199263 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:00:05.669764  199263 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:00:05.669963  199263 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:00:13.145102  199263 kubeadm.go:317] [apiclient] All control plane components are healthy after 7.610581 seconds
	I0921 22:00:13.145268  199263 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:00:13.155103  199263 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:00:13.671049  199263 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:00:13.671310  199263 kubeadm.go:317] [mark-control-plane] Marking the node calico-20220921215524-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:00:14.178975  199263 kubeadm.go:317] [bootstrap-token] Using token: jysb9b.0qft6t7x7hm9cvef
	I0921 22:00:14.180499  199263 out.go:204]   - Configuring RBAC rules ...
	I0921 22:00:14.180658  199263 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:00:14.183770  199263 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:00:14.188847  199263 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:00:14.191005  199263 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:00:14.193171  199263 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:00:14.195204  199263 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:00:14.202861  199263 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:00:14.422819  199263 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:00:14.588139  199263 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:00:14.589761  199263 kubeadm.go:317] 
	I0921 22:00:14.589878  199263 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:00:14.589900  199263 kubeadm.go:317] 
	I0921 22:00:14.590017  199263 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:00:14.590033  199263 kubeadm.go:317] 
	I0921 22:00:14.590066  199263 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:00:14.596143  199263 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:00:14.596225  199263 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:00:14.596239  199263 kubeadm.go:317] 
	I0921 22:00:14.596319  199263 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:00:14.596338  199263 kubeadm.go:317] 
	I0921 22:00:14.596404  199263 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:00:14.596423  199263 kubeadm.go:317] 
	I0921 22:00:14.596493  199263 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:00:14.596621  199263 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:00:14.596728  199263 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:00:14.596743  199263 kubeadm.go:317] 
	I0921 22:00:14.596850  199263 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:00:14.596963  199263 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:00:14.596976  199263 kubeadm.go:317] 
	I0921 22:00:14.597079  199263 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token jysb9b.0qft6t7x7hm9cvef \
	I0921 22:00:14.597231  199263 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:00:14.597264  199263 kubeadm.go:317] 	--control-plane 
	I0921 22:00:14.597278  199263 kubeadm.go:317] 
	I0921 22:00:14.597401  199263 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:00:14.597415  199263 kubeadm.go:317] 
	I0921 22:00:14.597517  199263 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token jysb9b.0qft6t7x7hm9cvef \
	I0921 22:00:14.597641  199263 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:00:14.600642  199263 kubeadm.go:317] W0921 22:00:03.059044     746 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:00:14.600939  199263 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:00:14.601108  199263 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:00:14.601146  199263 cni.go:95] Creating CNI manager for "calico"
	I0921 22:00:14.603036  199263 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0921 22:00:14.604877  199263 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:00:14.604900  199263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202045 bytes)
	I0921 22:00:14.623839  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:00:16.324978  199263 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.701098613s)
	I0921 22:00:16.325095  199263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:00:16.325167  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:16.325239  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=calico-20220921215524-10174 minikube.k8s.io/updated_at=2022_09_21T22_00_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:16.446750  199263 ops.go:34] apiserver oom_adj: -16
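ops.go confirms the API server runs with oom_adj -16, i.e. the kernel OOM killer strongly prefers other victims. The check is just a procfs read; a minimal Go equivalent (using a hypothetical fixed PID of 1 where the log resolves the real one via pgrep) is:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Hypothetical fixed PID; the log resolves the real one via pgrep kube-apiserver.
	data, err := os.ReadFile("/proc/1/oom_adj")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}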
	I0921 22:00:16.446851  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:17.024928  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:17.524931  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:18.024965  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:18.524704  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:19.024953  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:19.524667  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:20.024404  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:20.524943  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:21.025377  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:21.524551  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:22.025293  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:22.524911  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:23.024923  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:23.524934  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:24.024643  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:24.524984  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:25.024605  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:25.525007  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:26.024478  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:26.524980  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:27.024525  199263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:00:27.088876  199263 kubeadm.go:1067] duration metric: took 10.763755639s to wait for elevateKubeSystemPrivileges.
	I0921 22:00:27.088915  199263 kubeadm.go:398] StartCluster complete in 24.207417181s
	I0921 22:00:27.088935  199263 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:27.089039  199263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:00:27.090892  199263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:00:27.608281  199263 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220921215524-10174" rescaled to 1
	I0921 22:00:27.608343  199263 start.go:211] Will wait 5m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:00:27.610162  199263 out.go:177] * Verifying Kubernetes components...
	I0921 22:00:27.608398  199263 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:00:27.608410  199263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:00:27.608621  199263 config.go:180] Loaded profile config "calico-20220921215524-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:00:27.611461  199263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:00:27.611488  199263 addons.go:65] Setting storage-provisioner=true in profile "calico-20220921215524-10174"
	I0921 22:00:27.611503  199263 addons.go:65] Setting default-storageclass=true in profile "calico-20220921215524-10174"
	I0921 22:00:27.611511  199263 addons.go:153] Setting addon storage-provisioner=true in "calico-20220921215524-10174"
	W0921 22:00:27.611518  199263 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:00:27.611521  199263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220921215524-10174"
	I0921 22:00:27.611571  199263 host.go:66] Checking if "calico-20220921215524-10174" exists ...
	I0921 22:00:27.611955  199263 cli_runner.go:164] Run: docker container inspect calico-20220921215524-10174 --format={{.State.Status}}
	I0921 22:00:27.612652  199263 cli_runner.go:164] Run: docker container inspect calico-20220921215524-10174 --format={{.State.Status}}
	I0921 22:00:27.651809  199263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:00:27.653139  199263 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:00:27.653164  199263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:00:27.653225  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 22:00:27.664846  199263 addons.go:153] Setting addon default-storageclass=true in "calico-20220921215524-10174"
	W0921 22:00:27.664878  199263 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:00:27.664908  199263 host.go:66] Checking if "calico-20220921215524-10174" exists ...
	I0921 22:00:27.665400  199263 cli_runner.go:164] Run: docker container inspect calico-20220921215524-10174 --format={{.State.Status}}
	I0921 22:00:27.689259  199263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49383 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa Username:docker}
	I0921 22:00:27.697161  199263 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:00:27.697187  199263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:00:27.697237  199263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220921215524-10174
	I0921 22:00:27.725964  199263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49383 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/calico-20220921215524-10174/id_rsa Username:docker}
	I0921 22:00:27.729333  199263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:00:27.730258  199263 node_ready.go:35] waiting up to 5m0s for node "calico-20220921215524-10174" to be "Ready" ...
	I0921 22:00:27.733139  199263 node_ready.go:49] node "calico-20220921215524-10174" has status "Ready":"True"
	I0921 22:00:27.733159  199263 node_ready.go:38] duration metric: took 2.873319ms waiting for node "calico-20220921215524-10174" to be "Ready" ...
	I0921 22:00:27.733168  199263 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:00:27.743289  199263 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace to be "Ready" ...
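The pod_ready helper polls the pod's status conditions until PodReady reports True or the 5m budget expires, which is why the repeated pod_ready.go:102 lines below appear every couple of seconds. A minimal sketch of such a loop with client-go (an assumed equivalent, not minikube's actual pod_ready.go) follows:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, for at most 5m, until the PodReady condition is True.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"calico-kube-controllers-7df895d496-vk6vs", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("pod ready:", err == nil)
}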
	I0921 22:00:27.803352  199263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:00:27.910782  199263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:00:29.204226  199263 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.474849303s)
	I0921 22:00:29.204271  199263 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
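The pipeline completed above edits the CoreDNS ConfigMap in place: sed inserts a hosts plugin block ahead of the forward directive so in-cluster lookups of host.minikube.internal resolve to the gateway address 192.168.94.1, then kubectl replace writes the Corefile back.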
	I0921 22:00:29.302479  199263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.499078389s)
	I0921 22:00:29.302568  199263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.391748427s)
	I0921 22:00:29.304384  199263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0921 22:00:29.305647  199263 addons.go:414] enableAddons completed in 1.697247448s
	I0921 22:00:29.785184  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:32.284130  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:34.783534  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:36.785920  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:39.284087  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:41.783291  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:43.785405  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:45.785442  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:48.283658  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:50.284399  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:52.784819  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:54.785661  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:57.284528  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:00:59.785528  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:02.283990  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:04.785093  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:07.285033  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:09.785554  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:12.283317  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:14.785419  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:17.284489  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:19.785554  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:21.788431  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:24.282795  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:26.283052  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:28.284721  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:30.783006  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:32.786124  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:35.284259  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:37.286508  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:39.783829  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:41.787421  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:44.283230  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:46.785358  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:49.283231  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:51.784647  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:53.785717  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:56.284041  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:01:58.783815  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:00.783923  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:03.283561  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:05.784837  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:07.785470  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:10.283603  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:12.785065  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:14.785360  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:16.785760  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:18.786632  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:21.282907  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:23.283162  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:25.283561  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:27.284451  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:29.782771  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:31.785078  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:34.283276  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:36.784618  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:38.785789  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:41.282949  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:43.283158  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:45.783545  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:48.283437  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:50.283654  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:52.785091  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:54.785272  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:57.286226  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:02:59.784817  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:01.785748  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:04.283055  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:06.283163  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:08.785308  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:11.283557  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:13.784480  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:15.784531  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:17.785041  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:19.785311  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:22.282513  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:24.282757  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:26.282981  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:28.283100  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:30.785240  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:33.283585  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:35.784442  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:37.784725  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:39.784865  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:42.282777  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:44.282942  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:46.783254  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:49.283243  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:51.783288  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:53.784868  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:55.786215  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:03:58.282766  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:00.782996  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:02.784677  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:05.285308  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:07.783365  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:09.785992  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:12.282740  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:14.283416  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:16.785048  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:19.283886  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:21.783586  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:24.283127  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:26.283486  199263 pod_ready.go:102] pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:27.790035  199263 pod_ready.go:81] duration metric: took 4m0.046661281s waiting for pod "calico-kube-controllers-7df895d496-vk6vs" in "kube-system" namespace to be "Ready" ...
	E0921 22:04:27.790062  199263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0921 22:04:27.790073  199263 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-lcxq5" in "kube-system" namespace to be "Ready" ...
	I0921 22:04:29.800454  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:32.300614  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:34.301269  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:36.301807  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:38.302950  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:40.802624  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:43.301398  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:45.801155  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:47.802184  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:50.301064  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:52.302412  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:54.800402  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:56.801873  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:04:58.812533  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:01.303906  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:03.802209  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:06.300760  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:08.301767  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:10.301882  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:12.800952  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:14.801720  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:17.302351  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:19.800948  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:22.301833  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:24.800928  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:26.801077  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:29.300473  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:31.301096  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:33.801541  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:36.301865  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:38.800749  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:40.803297  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:43.302011  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:45.801083  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:47.801163  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:49.801209  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:52.301723  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:54.302243  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:56.801721  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:05:59.300921  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:01.301844  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:03.801750  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:05.803234  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:08.301210  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:10.802098  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:12.803624  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:15.301542  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:17.302332  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:19.801295  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:21.802170  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:24.301246  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:26.301676  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:28.800907  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:30.801462  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:33.301821  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:35.801897  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:38.301233  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:40.800629  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:42.802621  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:45.301161  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:47.302495  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:49.803053  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:52.301468  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:54.800733  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:56.801720  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:06:59.301405  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:01.301713  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:03.302728  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:05.801280  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:08.300112  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:10.301896  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:12.802839  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:15.300709  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:17.302067  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:19.304244  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:21.801272  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:23.801482  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:25.801761  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:27.801856  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:30.301839  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:32.801927  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:34.801960  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:37.302473  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:39.802281  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:42.301089  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:44.301346  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:46.301546  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:48.801671  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:51.301532  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:53.801208  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:56.300909  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:07:58.301176  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:00.800740  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:02.802010  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:05.302780  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:07.801147  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:09.801330  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:11.801656  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:13.803616  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:16.301159  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:18.303243  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:20.800890  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:22.801657  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:25.301715  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:27.801161  199263 pod_ready.go:102] pod "calico-node-lcxq5" in "kube-system" namespace has status "Ready":"False"
	I0921 22:08:27.806679  199263 pod_ready.go:81] duration metric: took 4m0.01659466s waiting for pod "calico-node-lcxq5" in "kube-system" namespace to be "Ready" ...
	E0921 22:08:27.806708  199263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0921 22:08:27.806720  199263 pod_ready.go:38] duration metric: took 8m0.073540902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:08:27.808921  199263 out.go:177] 
	W0921 22:08:27.810384  199263 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0921 22:08:27.810411  199263 out.go:239] * 
	W0921 22:08:27.811334  199263 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:08:27.813421  199263 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (528.08s)
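
For context on the failure above: pod_ready.go polls the "Ready" condition of each system-critical pod on a roughly 2.5-second cadence and gives up once its budget is spent. Here both calico-kube-controllers-7df895d496-vk6vs and calico-node-lcxq5 consumed 4m0s each before `WaitExtra: waitPodCondition: timed out waiting for the condition`, for 8m0.07s of extra waiting in total, after which start aborts with GUEST_START and net_test.go:103 reports `failed start: exit status 80`. Earlier in the same log, start.go:810 shows the sed pipeline that inserts a hosts stanza (192.168.94.1 mapped to host.minikube.internal, with fallthrough) above CoreDNS's `forward . /etc/resolv.conf` directive. Below is a minimal client-go sketch of such a readiness poll, assuming a kubeconfig at the default location and reusing the pod name and 4-minute budget observed in this run; it is an illustration, not minikube's actual pod_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True,
	// the same condition the log above keeps printing as "Ready":"False".
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every ~2.5s and give up after 4 minutes, matching the cadence
		// and the "took 4m0.04s ... timed out" outcome visible above.
		err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-lcxq5", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return podReady(pod), nil
		})
		if err != nil {
			fmt.Println("wait failed:", err) // "timed out waiting for the condition"
		}
	}

Run against this profile's cluster, the sketch would hit the same timeout; the useful next step is finding out why calico-node never reports Ready (often a crash-looping CNI container), e.g. via the `minikube logs` command the box above suggests.
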

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (309.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134203403s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131413467s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128789429s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127056329s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131724409s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129460286s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130980292s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0921 22:04:20.481650   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:04:27.009812   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:27.015055   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:27.025283   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:27.045555   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:27.085829   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:27.166130   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:27.326807   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:27.647385   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:28.287593   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:29.568345   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:32.128551   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12271172s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:05:02.132082   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157683639s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0921 22:05:22.612532   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128511615s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0921 22:06:07.267910   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:12.388920   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:07:10.852702   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133035566s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (309.67s)
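
For context: net_test.go:169 execs `nslookup kubernetes.default` inside the `netcat` deployment, retrying for several minutes, and net_test.go:180 requires the answer to contain the kubernetes service ClusterIP 10.96.0.1. Every attempt here instead returned ";; connection timed out; no servers could be reached", i.e. the pod never reached CoreDNS at all, which points at broken pod-to-service networking under the bridge CNI rather than a wrong DNS record. (The interleaved cert_rotation errors are the shared kubeconfig still referencing client certs of profiles that earlier tests deleted; they are noise for this failure.) A rough Go equivalent of the probe, reusing the context name from this run, is sketched below; it approximates what the test does rather than reproducing net_test.go verbatim.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the test runs: exec nslookup inside the netcat deployment.
		out, err := exec.Command("kubectl", "--context", "bridge-20220921215523-10174",
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			fmt.Printf("nslookup failed (%v):\n%s", err, out)
			return
		}
		// net_test.go:180 wants the kubernetes service ClusterIP in the answer.
		if !strings.Contains(string(out), "10.96.0.1") {
			fmt.Printf("unexpected answer:\n%s", out)
			return
		}
		fmt.Println("cluster DNS OK")
	}

A quicker manual check is the same kubectl command the test runs; if it times out, the state of the kube-dns pods in kube-system is the next place to look.
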

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (277.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220921220439-10174 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2
E0921 22:04:41.650507   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:41.655795   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:41.666008   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:41.686286   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:41.726535   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:41.806844   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:41.967330   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:42.287833   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:42.928724   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:44.209347   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:46.770307   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:47.489804   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:04:51.891282   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220921220439-10174 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: exit status 80 (4m35.925179106s)

                                                
                                                
-- stdout --
	* [embed-certs-20220921220439-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node embed-certs-20220921220439-10174 in cluster embed-certs-20220921220439-10174
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:04:39.679609  228234 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:04:39.679785  228234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:04:39.679796  228234 out.go:309] Setting ErrFile to fd 2...
	I0921 22:04:39.679803  228234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:04:39.679922  228234 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:04:39.680511  228234 out.go:303] Setting JSON to false
	I0921 22:04:39.682038  228234 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2831,"bootTime":1663795049,"procs":721,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:04:39.682107  228234 start.go:125] virtualization: kvm guest
	I0921 22:04:39.684959  228234 out.go:177] * [embed-certs-20220921220439-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:04:39.686397  228234 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:04:39.686429  228234 notify.go:214] Checking for updates...
	I0921 22:04:39.687882  228234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:04:39.689512  228234 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:04:39.691111  228234 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:04:39.692493  228234 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:04:39.694183  228234 config.go:180] Loaded profile config "bridge-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:04:39.694276  228234 config.go:180] Loaded profile config "calico-20220921215524-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:04:39.694354  228234 config.go:180] Loaded profile config "enable-default-cni-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:04:39.694397  228234 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:04:39.726360  228234 docker.go:137] docker version: linux-20.10.18
	I0921 22:04:39.726466  228234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:04:39.825214  228234 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:04:39.748646162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:04:39.825406  228234 docker.go:254] overlay module found
	I0921 22:04:39.827940  228234 out.go:177] * Using the docker driver based on user configuration
	I0921 22:04:39.829534  228234 start.go:284] selected driver: docker
	I0921 22:04:39.829557  228234 start.go:808] validating driver "docker" against <nil>
	I0921 22:04:39.829575  228234 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:04:39.830679  228234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:04:39.927675  228234 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:04:39.851879511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:04:39.927845  228234 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:04:39.927990  228234 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:04:39.930004  228234 out.go:177] * Using Docker driver with root privileges
	I0921 22:04:39.931285  228234 cni.go:95] Creating CNI manager for ""
	I0921 22:04:39.931305  228234 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:04:39.931321  228234 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:04:39.931334  228234 start_flags.go:316] config:
	{Name:embed-certs-20220921220439-10174 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:04:39.932902  228234 out.go:177] * Starting control plane node embed-certs-20220921220439-10174 in cluster embed-certs-20220921220439-10174
	I0921 22:04:39.934194  228234 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:04:39.935411  228234 out.go:177] * Pulling base image ...
	I0921 22:04:39.936532  228234 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:04:39.936555  228234 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:04:39.936565  228234 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:04:39.936574  228234 cache.go:57] Caching tarball of preloaded images
	I0921 22:04:39.936812  228234 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:04:39.936828  228234 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:04:39.936926  228234 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/config.json ...
	I0921 22:04:39.936949  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/config.json: {Name:mkc9e8eba9b4298e9a46c68840d31b3ff8e82be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:39.963313  228234 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:04:39.963340  228234 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:04:39.963356  228234 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:04:39.963393  228234 start.go:364] acquiring machines lock for embed-certs-20220921220439-10174: {Name:mk045ddc97e52cc6fb76c850f85eeab9304c52af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:04:39.963510  228234 start.go:368] acquired machines lock for "embed-certs-20220921220439-10174" in 98.056µs
	I0921 22:04:39.963533  228234 start.go:93] Provisioning new machine with config: &{Name:embed-certs-20220921220439-10174 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:04:39.963620  228234 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:04:39.965708  228234 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:04:39.965925  228234 start.go:159] libmachine.API.Create for "embed-certs-20220921220439-10174" (driver="docker")
	I0921 22:04:39.965955  228234 client.go:168] LocalClient.Create starting
	I0921 22:04:39.966006  228234 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:04:39.966035  228234 main.go:134] libmachine: Decoding PEM data...
	I0921 22:04:39.966052  228234 main.go:134] libmachine: Parsing certificate...
	I0921 22:04:39.966112  228234 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:04:39.966131  228234 main.go:134] libmachine: Decoding PEM data...
	I0921 22:04:39.966144  228234 main.go:134] libmachine: Parsing certificate...
	I0921 22:04:39.966420  228234 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220439-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:04:39.988899  228234 cli_runner.go:211] docker network inspect embed-certs-20220921220439-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:04:39.988977  228234 network_create.go:272] running [docker network inspect embed-certs-20220921220439-10174] to gather additional debugging logs...
	I0921 22:04:39.989006  228234 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220439-10174
	W0921 22:04:40.013487  228234 cli_runner.go:211] docker network inspect embed-certs-20220921220439-10174 returned with exit code 1
	I0921 22:04:40.013524  228234 network_create.go:275] error running [docker network inspect embed-certs-20220921220439-10174]: docker network inspect embed-certs-20220921220439-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220921220439-10174
	I0921 22:04:40.013541  228234 network_create.go:277] output of [docker network inspect embed-certs-20220921220439-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220921220439-10174
	
	** /stderr **
	I0921 22:04:40.013606  228234 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:04:40.037792  228234 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:04:40.038716  228234 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:04:40.039640  228234 network.go:290] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000562268] misses:0}
	I0921 22:04:40.039674  228234 network.go:236] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:04:40.039686  228234 network_create.go:115] attempt to create docker network embed-certs-20220921220439-10174 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0921 22:04:40.039776  228234 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-20220921220439-10174 embed-certs-20220921220439-10174
	I0921 22:04:40.102182  228234 network_create.go:99] docker network embed-certs-20220921220439-10174 192.168.67.0/24 created
	I0921 22:04:40.102216  228234 kic.go:106] calculated static IP "192.168.67.2" for the "embed-certs-20220921220439-10174" container
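
The network.go lines above are minikube's free-subnet scan: it walks candidate /24s, skips any already claimed by an existing bridge (192.168.49.0/24 and 192.168.58.0/24 here), reserves the first free one, and creates the network. A rough manual equivalent, as a sketch only (the network name is taken from this log; the rest is illustrative):

	# subnets already claimed by Docker bridge networks
	docker network ls -q --filter driver=bridge \
	  | xargs docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# the first free candidate becomes the cluster network
	docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 \
	  embed-certs-20220921220439-10174
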
	I0921 22:04:40.102285  228234 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:04:40.126530  228234 cli_runner.go:164] Run: docker volume create embed-certs-20220921220439-10174 --label name.minikube.sigs.k8s.io=embed-certs-20220921220439-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:04:40.150805  228234 oci.go:103] Successfully created a docker volume embed-certs-20220921220439-10174
	I0921 22:04:40.150913  228234 cli_runner.go:164] Run: docker run --rm --name embed-certs-20220921220439-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220921220439-10174 --entrypoint /usr/bin/test -v embed-certs-20220921220439-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:04:40.754589  228234 oci.go:107] Successfully prepared a docker volume embed-certs-20220921220439-10174
	I0921 22:04:40.754622  228234 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:04:40.754641  228234 kic.go:179] Starting extracting preloaded images to volume ...
	I0921 22:04:40.754696  228234 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220921220439-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir
	I0921 22:04:47.333565  228234 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220921220439-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir: (6.578810446s)
	I0921 22:04:47.333595  228234 kic.go:188] duration metric: took 6.578952 seconds to extract preloaded images to volume
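
The 6.58s step above is plain tar: minikube starts a throwaway kicbase container, bind-mounts the lz4 preload read-only, and unpacks it into the named volume that later becomes the node's /var. A minimal standalone sketch (the volume name and tarball path are illustrative; the image reference is the one from this log):

	docker volume create demo-var
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	  -v demo-var:/extractDir \
	  gcr.io/k8s-minikube/kicbase:v0.0.34 -I lz4 -xf /preloaded.tar -C /extractDir
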
	W0921 22:04:47.333758  228234 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:04:47.333895  228234 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:04:47.428443  228234 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220921220439-10174 --name embed-certs-20220921220439-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220921220439-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220921220439-10174 --network embed-certs-20220921220439-10174 --ip 192.168.67.2 --volume embed-certs-20220921220439-10174:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:04:47.830285  228234 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Running}}
	I0921 22:04:47.857251  228234 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:04:47.883151  228234 cli_runner.go:164] Run: docker exec embed-certs-20220921220439-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:04:47.949354  228234 oci.go:144] the created container "embed-certs-20220921220439-10174" has a running status.
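
Note the --publish=127.0.0.1::8443 style flags in the docker run above: the empty host-port slot tells Docker to pick a free ephemeral port, bound to loopback only. The chosen port is recovered with the same inspect template this log uses a moment later for SSH (it resolves to 49398 here):

	# which host port fronts the container's sshd (22/tcp)?
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  embed-certs-20220921220439-10174
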
	I0921 22:04:47.949387  228234 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa...
	I0921 22:04:48.041696  228234 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:04:48.122356  228234 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:04:48.157990  228234 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:04:48.158025  228234 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220921220439-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:04:48.240260  228234 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:04:48.268571  228234 machine.go:88] provisioning docker machine ...
	I0921 22:04:48.268605  228234 ubuntu.go:169] provisioning hostname "embed-certs-20220921220439-10174"
	I0921 22:04:48.268684  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:04:48.298202  228234 main.go:134] libmachine: Using SSH client type: native
	I0921 22:04:48.298439  228234 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49398 <nil> <nil>}
	I0921 22:04:48.298465  228234 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220921220439-10174 && echo "embed-certs-20220921220439-10174" | sudo tee /etc/hostname
	I0921 22:04:48.440942  228234 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220921220439-10174
	
	I0921 22:04:48.441005  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:04:48.467972  228234 main.go:134] libmachine: Using SSH client type: native
	I0921 22:04:48.468154  228234 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49398 <nil> <nil>}
	I0921 22:04:48.468198  228234 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220921220439-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220921220439-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220921220439-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:04:48.595883  228234 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:04:48.595917  228234 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:04:48.595947  228234 ubuntu.go:177] setting up certificates
	I0921 22:04:48.595958  228234 provision.go:83] configureAuth start
	I0921 22:04:48.596010  228234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220921220439-10174
	I0921 22:04:48.624313  228234 provision.go:138] copyHostCerts
	I0921 22:04:48.624375  228234 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:04:48.624385  228234 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:04:48.624470  228234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:04:48.624571  228234 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:04:48.624586  228234 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:04:48.624630  228234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:04:48.624699  228234 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:04:48.624712  228234 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:04:48.624744  228234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:04:48.624841  228234 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220921220439-10174 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220921220439-10174]
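
The san=[...] list above becomes the Subject Alternative Name extension of the generated server certificate, which is what lets clients validate the endpoint as 192.168.67.2, localhost, or the node name. A quick after-the-fact check, as a sketch (path abbreviated; any x509 certificate works here):

	openssl x509 -noout -text -in server.pem | grep -A1 'Subject Alternative Name'
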
	I0921 22:04:48.859146  228234 provision.go:172] copyRemoteCerts
	I0921 22:04:48.859205  228234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:04:48.859235  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:04:48.885567  228234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:04:48.983954  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:04:49.004381  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0921 22:04:49.023208  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:04:49.040903  228234 provision.go:86] duration metric: configureAuth took 444.930475ms
	I0921 22:04:49.040936  228234 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:04:49.041081  228234 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:04:49.041093  228234 machine.go:91] provisioned docker machine in 772.502414ms
	I0921 22:04:49.041098  228234 client.go:171] LocalClient.Create took 9.075135608s
	I0921 22:04:49.041114  228234 start.go:167] duration metric: libmachine.API.Create for "embed-certs-20220921220439-10174" took 9.075189828s
	I0921 22:04:49.041123  228234 start.go:300] post-start starting for "embed-certs-20220921220439-10174" (driver="docker")
	I0921 22:04:49.041130  228234 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:04:49.041174  228234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:04:49.041212  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:04:49.066851  228234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:04:49.158786  228234 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:04:49.161293  228234 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:04:49.161317  228234 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:04:49.161332  228234 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:04:49.161339  228234 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:04:49.161355  228234 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:04:49.161400  228234 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:04:49.161479  228234 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:04:49.161577  228234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:04:49.168112  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:04:49.185174  228234 start.go:303] post-start completed in 144.03866ms
	I0921 22:04:49.185658  228234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220921220439-10174
	I0921 22:04:49.210607  228234 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/config.json ...
	I0921 22:04:49.210894  228234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:04:49.210947  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:04:49.236202  228234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:04:49.323984  228234 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:04:49.327993  228234 start.go:128] duration metric: createHost completed in 9.364360046s
	I0921 22:04:49.328014  228234 start.go:83] releasing machines lock for "embed-certs-20220921220439-10174", held for 9.364491821s
	I0921 22:04:49.328080  228234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220921220439-10174
	I0921 22:04:49.353300  228234 ssh_runner.go:195] Run: systemctl --version
	I0921 22:04:49.353339  228234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:04:49.353376  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:04:49.353412  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:04:49.377158  228234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:04:49.378883  228234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:04:49.498157  228234 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:04:49.508272  228234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:04:49.517405  228234 docker.go:188] disabling docker service ...
	I0921 22:04:49.517461  228234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:04:49.536231  228234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:04:49.545287  228234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:04:49.624225  228234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:04:49.700025  228234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:04:49.709043  228234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:04:49.722675  228234 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:04:49.730469  228234 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:04:49.738466  228234 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:04:49.747124  228234 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
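
Taken together, the four sed edits above pin the pause/sandbox image, stop containerd from rewriting OOM scores, force the cgroupfs cgroup driver (matching the kubelet config further down), and point CNI at minikube's private conf dir. A sanity check over the keys they touched, as a sketch (expected values are the ones substituted above):

	sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	#   sandbox_image = "registry.k8s.io/pause:3.8"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.mk"
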
	I0921 22:04:49.755570  228234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:04:49.762521  228234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:04:49.769181  228234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:04:49.852587  228234 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:04:49.936591  228234 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:04:49.936665  228234 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:04:49.940243  228234 start.go:471] Will wait 60s for crictl version
	I0921 22:04:49.940296  228234 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:04:49.967828  228234 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
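
crictl only answers here because of the /etc/crictl.yaml written at 22:04:49.709, which points both the runtime and image endpoints at the containerd socket instead of the now-masked dockerd. The same check can be run by hand on the node, assuming crictl is on the PATH:

	sudo crictl --config /etc/crictl.yaml version
	sudo crictl info   # the command the log runs a few lines below
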
	I0921 22:04:49.967897  228234 ssh_runner.go:195] Run: containerd --version
	I0921 22:04:49.998375  228234 ssh_runner.go:195] Run: containerd --version
	I0921 22:04:50.029956  228234 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:04:50.031371  228234 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220439-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:04:50.054732  228234 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0921 22:04:50.057963  228234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
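
The temp-file dance above exists because the shell performs the > redirection before sudo elevates, so a plain sudo grep ... > /etc/hosts would fail on the root-owned file; writing to /tmp/h.$$ and then sudo cp-ing it into place sidesteps that. For a simple append, the tee form minikube uses elsewhere in this log is equivalent:

	echo "192.168.67.1	host.minikube.internal" | sudo tee -a /etc/hosts >/dev/null
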
	I0921 22:04:50.067248  228234 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:04:50.067306  228234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:04:50.091694  228234 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:04:50.091713  228234 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:04:50.091827  228234 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:04:50.116401  228234 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:04:50.116422  228234 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:04:50.116465  228234 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:04:50.139276  228234 cni.go:95] Creating CNI manager for ""
	I0921 22:04:50.139307  228234 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:04:50.139322  228234 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:04:50.139339  228234 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220921220439-10174 NodeName:embed-certs-20220921220439-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:04:50.139519  228234 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220921220439-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
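
Outside of a test run, a generated config like the one above can be exercised without touching the node: kubeadm accepts it via --config, and --dry-run prints the would-be manifests instead of applying them. A sketch, assuming a matching kubeadm v1.25.x binary (the file path is the one the log scps a few lines below):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
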
	
	I0921 22:04:50.139624  228234 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220921220439-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
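
The [Service] fragment above is installed as a systemd drop-in (the 525-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below); the empty ExecStart= line first clears the base unit's command so the long kubelet invocation fully replaces it. To see the merged unit on a node:

	systemctl cat kubelet          # base unit plus the 10-kubeadm.conf override
	sudo systemctl daemon-reload   # needed whenever a drop-in is edited by hand
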
	I0921 22:04:50.139676  228234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:04:50.146463  228234 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:04:50.146512  228234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:04:50.153517  228234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (525 bytes)
	I0921 22:04:50.165838  228234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:04:50.178235  228234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0921 22:04:50.189869  228234 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:04:50.192603  228234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:04:50.201349  228234 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174 for IP: 192.168.67.2
	I0921 22:04:50.201439  228234 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:04:50.201476  228234 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:04:50.201518  228234 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/client.key
	I0921 22:04:50.201535  228234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/client.crt with IP's: []
	I0921 22:04:50.368865  228234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/client.crt ...
	I0921 22:04:50.368893  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/client.crt: {Name:mk226ea1db943207af02e5c53705f05570747631 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:50.369101  228234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/client.key ...
	I0921 22:04:50.369116  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/client.key: {Name:mk8a762893e501746b514cbb565551164b1cdfd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:50.369215  228234 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key.c7fa3a9e
	I0921 22:04:50.369230  228234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:04:50.469613  228234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.crt.c7fa3a9e ...
	I0921 22:04:50.469644  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.crt.c7fa3a9e: {Name:mkeb79da00158597c717efbed13bf2957f89d893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:50.469869  228234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key.c7fa3a9e ...
	I0921 22:04:50.469887  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key.c7fa3a9e: {Name:mk903a1dce0ba280c8749746a57b2c1f12cf0fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:50.470015  228234 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.crt
	I0921 22:04:50.470080  228234 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key
	I0921 22:04:50.470128  228234 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.key
	I0921 22:04:50.470143  228234 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.crt with IP's: []
	I0921 22:04:50.969711  228234 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.crt ...
	I0921 22:04:50.969747  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.crt: {Name:mkc9e16efe729c68e56d67f05e346fb835c5813d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:50.969967  228234 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.key ...
	I0921 22:04:50.969985  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.key: {Name:mk5c76af13224a56431e10d32c235b07df0aea5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:04:50.970236  228234 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:04:50.970286  228234 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:04:50.970307  228234 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:04:50.970338  228234 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:04:50.970375  228234 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:04:50.970412  228234 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:04:50.970465  228234 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:04:50.970976  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:04:50.989272  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0921 22:04:51.007111  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:04:51.024187  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0921 22:04:51.040826  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:04:51.058119  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:04:51.077576  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:04:51.094533  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:04:51.111565  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:04:51.128119  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:04:51.144985  228234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:04:51.162466  228234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
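	(Annotation: the `scp memory -->` entries denote minikube's ssh_runner writing an in-memory buffer straight to a path inside the node; there is no local source file. A rough stand-alone sketch of the same operation, assuming the node's forwarded SSH port and per-profile key — the concrete values appear in the sshutil lines later in this log:

	    # illustrative sketch only, not minikube's actual code path:
	    # stream locally held bytes into the node over SSH, as "scp memory" does
	    cat kubeconfig |
	      ssh -p "$SSH_PORT" -i "$SSH_KEY" docker@127.0.0.1 \
	          'sudo tee /var/lib/minikube/kubeconfig >/dev/null'
	)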
	I0921 22:04:51.175087  228234 ssh_runner.go:195] Run: openssl version
	I0921 22:04:51.180074  228234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:04:51.187298  228234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:04:51.190257  228234 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:04:51.190310  228234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:04:51.194896  228234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:04:51.201954  228234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:04:51.209310  228234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:04:51.212480  228234 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:04:51.212531  228234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:04:51.217351  228234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:04:51.224284  228234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:04:51.231020  228234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:04:51.233972  228234 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:04:51.234003  228234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:04:51.238499  228234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
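	(Annotation: the `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above follow OpenSSL's hashed-directory convention: verifiers locate a CA by the hash of its subject name, so each certificate needs a `<hash>.0` symlink, `.0` meaning the first certificate with that hash. A minimal sketch of the same two steps for one CA:

	    # compute the subject-name hash OpenSSL uses for directory lookups...
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # ...and publish the certificate under that name for the verifier to find
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)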
	I0921 22:04:51.245389  228234 kubeadm.go:396] StartCluster: {Name:embed-certs-20220921220439-10174 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:04:51.245460  228234 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:04:51.245490  228234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:04:51.269823  228234 cri.go:87] found id: ""
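	(Annotation: the empty `found id: ""` result is the answer minikube is probing for here — no kube-system containers exist in containerd yet, so this start is treated as a fresh `kubeadm init` rather than a recovery of a running cluster. The same probe can be repeated by hand on the node:

	    # empty output = no kube-system containers yet (fresh control plane)
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	)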
	I0921 22:04:51.269887  228234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:04:51.277067  228234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:04:51.284134  228234 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:04:51.284190  228234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:04:51.291143  228234 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:04:51.291188  228234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:04:51.335167  228234 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:04:51.335275  228234 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:04:51.363535  228234 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:04:51.363658  228234 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:04:51.363795  228234 kubeadm.go:317] OS: Linux
	I0921 22:04:51.363870  228234 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:04:51.363933  228234 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:04:51.363995  228234 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:04:51.364058  228234 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:04:51.364122  228234 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:04:51.364189  228234 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:04:51.364234  228234 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:04:51.364276  228234 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:04:51.364317  228234 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:04:51.428458  228234 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:04:51.428593  228234 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:04:51.428726  228234 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:04:51.544509  228234 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:04:51.547910  228234 out.go:204]   - Generating certificates and keys ...
	I0921 22:04:51.548031  228234 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:04:51.548114  228234 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:04:51.775447  228234 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:04:52.081507  228234 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:04:52.147940  228234 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:04:52.716722  228234 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:04:52.797955  228234 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:04:52.798201  228234 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [embed-certs-20220921220439-10174 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0921 22:04:52.988500  228234 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:04:52.988697  228234 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-20220921220439-10174 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0921 22:04:53.198727  228234 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:04:53.315115  228234 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:04:53.500608  228234 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:04:53.500784  228234 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:04:53.648199  228234 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:04:53.732418  228234 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:04:54.016649  228234 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:04:54.164894  228234 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:04:54.176354  228234 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:04:54.177164  228234 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:04:54.177233  228234 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:04:54.260780  228234 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:04:54.264252  228234 out.go:204]   - Booting up control plane ...
	I0921 22:04:54.264423  228234 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:04:54.264518  228234 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:04:54.265182  228234 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:04:54.266957  228234 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:04:54.268968  228234 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:05:00.771866  228234 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502825 seconds
	I0921 22:05:00.772020  228234 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:05:00.782129  228234 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:05:01.301482  228234 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:05:01.301779  228234 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220921220439-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:05:01.809279  228234 kubeadm.go:317] [bootstrap-token] Using token: 7puybp.umpz069ulwxpc1o9
	I0921 22:05:01.811166  228234 out.go:204]   - Configuring RBAC rules ...
	I0921 22:05:01.811295  228234 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:05:01.813850  228234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:05:01.818255  228234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:05:01.820463  228234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:05:01.822501  228234 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:05:01.824560  228234 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:05:01.834263  228234 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:05:02.036741  228234 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:05:02.217724  228234 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:05:02.218817  228234 kubeadm.go:317] 
	I0921 22:05:02.218931  228234 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:05:02.218955  228234 kubeadm.go:317] 
	I0921 22:05:02.219036  228234 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:05:02.219049  228234 kubeadm.go:317] 
	I0921 22:05:02.219075  228234 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:05:02.219144  228234 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:05:02.219210  228234 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:05:02.219221  228234 kubeadm.go:317] 
	I0921 22:05:02.219275  228234 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:05:02.219281  228234 kubeadm.go:317] 
	I0921 22:05:02.219330  228234 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:05:02.219335  228234 kubeadm.go:317] 
	I0921 22:05:02.219388  228234 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:05:02.219522  228234 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:05:02.219634  228234 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:05:02.219663  228234 kubeadm.go:317] 
	I0921 22:05:02.219858  228234 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:05:02.219960  228234 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:05:02.219975  228234 kubeadm.go:317] 
	I0921 22:05:02.220086  228234 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 7puybp.umpz069ulwxpc1o9 \
	I0921 22:05:02.220231  228234 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:05:02.220264  228234 kubeadm.go:317] 	--control-plane 
	I0921 22:05:02.220275  228234 kubeadm.go:317] 
	I0921 22:05:02.220368  228234 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:05:02.220378  228234 kubeadm.go:317] 
	I0921 22:05:02.220521  228234 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 7puybp.umpz069ulwxpc1o9 \
	I0921 22:05:02.220653  228234 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:05:02.222763  228234 kubeadm.go:317] W0921 22:04:51.327079     740 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:05:02.223039  228234 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:05:02.223183  228234 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
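	(Annotation: the `--discovery-token-ca-cert-hash` value in the join commands above is the SHA-256 of the cluster CA's public key. If the printout is lost, it can be recomputed on the control plane with the standard kubeadm recipe, using this profile's certificate directory /var/lib/minikube/certs:

	    # recompute the discovery hash from the CA certificate kubeadm used
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt |
	      openssl rsa -pubin -outform der 2>/dev/null |
	      openssl dgst -sha256 -hex | sed 's/^.* //'
	)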
	I0921 22:05:02.223229  228234 cni.go:95] Creating CNI manager for ""
	I0921 22:05:02.223251  228234 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:05:02.225872  228234 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:05:02.227244  228234 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:05:02.280693  228234 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:05:02.280726  228234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:05:02.297573  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
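	(Annotation: at this point the kindnet manifest has been applied, but nothing in the log yet confirms the CNI actually came up — and given how this test ends, that is the step worth watching. A hedged follow-up check, assuming the manifest creates a kube-system DaemonSet named kindnet as minikube's bundled kindnet config does:

	    # assumption: the applied CNI manifest creates daemonset/kindnet in kube-system
	    sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system rollout status daemonset/kindnet --timeout=60s
	)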
	I0921 22:05:03.104451  228234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:05:03.104510  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:03.104572  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=embed-certs-20220921220439-10174 minikube.k8s.io/updated_at=2022_09_21T22_05_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:03.196495  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:03.196520  228234 ops.go:34] apiserver oom_adj: -16
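	(Annotation: the `-16` read back here is the kubelet-set value of the legacy /proc OOM knob — range -17..15, where -17 exempts a process from the OOM killer entirely — so the kernel will avoid killing kube-apiserver under memory pressure. The check is just:

	    # strongly negative oom_adj = kernel OOM killer will avoid this process
	    cat /proc/$(pgrep kube-apiserver)/oom_adj
	)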
	I0921 22:05:03.758805  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:04.259369  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:04.759108  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:05.259445  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:05.759317  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:06.259048  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:06.758516  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:07.258868  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:07.758991  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:08.259415  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:08.758600  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:09.258700  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:09.759102  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:10.258497  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:10.759267  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:11.259014  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:11.759195  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:12.259428  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:12.758835  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:13.258775  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:13.759031  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:14.258444  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:14.759071  228234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:05:14.889584  228234 kubeadm.go:1067] duration metric: took 11.785123801s to wait for elevateKubeSystemPrivileges.
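	(Annotation: the burst of `kubectl get sa default` runs above is a ~500 ms poll — the `default` ServiceAccount only appears once the controller-manager's service-account controller is live, so its existence doubles as a control-plane readiness probe. A stand-alone sketch of the same loop:

	    # illustrative re-creation of the poll shown in the log, not minikube's code
	    until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	)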
	I0921 22:05:14.889625  228234 kubeadm.go:398] StartCluster complete in 23.644242354s
	I0921 22:05:14.889648  228234 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:05:14.889778  228234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:05:14.891673  228234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:05:15.408214  228234 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220921220439-10174" rescaled to 1
	I0921 22:05:15.408265  228234 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:05:15.410439  228234 out.go:177] * Verifying Kubernetes components...
	I0921 22:05:15.408326  228234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:05:15.408347  228234 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:05:15.408499  228234 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:05:15.411813  228234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:05:15.411852  228234 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220921220439-10174"
	I0921 22:05:15.411887  228234 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220921220439-10174"
	W0921 22:05:15.411900  228234 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:05:15.411856  228234 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220921220439-10174"
	I0921 22:05:15.411965  228234 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:05:15.411974  228234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220921220439-10174"
	I0921 22:05:15.412327  228234 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:05:15.412524  228234 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:05:15.451509  228234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:05:15.453217  228234 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:05:15.453243  228234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:05:15.453320  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:05:15.476833  228234 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220921220439-10174"
	W0921 22:05:15.476870  228234 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:05:15.476903  228234 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:05:15.477414  228234 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:05:15.480867  228234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:05:15.512478  228234 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:05:15.512510  228234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:05:15.512567  228234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:05:15.515584  228234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:05:15.516989  228234 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:05:15.540859  228234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49398 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:05:15.596517  228234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:05:15.778327  228234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:05:15.917027  228234 start.go:810] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
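	(Annotation: the injected host record comes from the sed pipeline a few lines up, which splices a `hosts` block into CoreDNS's Corefile ahead of the `forward` plugin so that `host.minikube.internal` resolves to the network gateway. A sketch of how to confirm the rewrite landed:

	    # the Corefile in the ConfigMap should now contain, before "forward .":
	    #   hosts {
	    #      192.168.67.1 host.minikube.internal
	    #      fallthrough
	    #   }
	    sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o yaml
	)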
	I0921 22:05:16.117818  228234 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0921 22:05:16.119052  228234 addons.go:414] enableAddons completed in 710.712984ms
	I0921 22:05:17.524049  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:20.023426  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:22.023535  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:24.523235  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:26.523732  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:28.524238  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:31.023626  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:33.523388  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:35.523733  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:38.024232  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:40.524052  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:43.023460  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:45.523401  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:48.023938  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:50.024228  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:52.523732  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:55.023366  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:05:57.523881  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:00.023469  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:02.024251  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:04.524453  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:07.023554  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:09.023587  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:11.523490  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:13.524167  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:16.023496  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:18.025602  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:20.524978  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:23.023178  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:25.023663  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:27.024414  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:29.524551  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:32.023408  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:34.524338  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:37.023144  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:39.023521  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:41.024317  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:43.524368  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:46.023452  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:48.023644  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:50.024154  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:52.523404  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:55.023550  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:06:57.524301  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:00.023193  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:02.023468  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:04.023825  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:06.524222  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:09.023363  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:11.523606  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:13.523702  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:16.023258  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:18.023543  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:20.023886  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:22.026168  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:24.523470  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:26.523545  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:28.523651  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:30.524297  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:32.524576  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:35.023635  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:37.523843  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:39.523947  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:42.023354  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:44.023462  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:46.523889  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:49.023065  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:51.024322  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:53.523230  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:55.523264  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:07:57.524211  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:00.023865  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:02.024139  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:04.024204  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:06.524035  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:08.524251  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:11.023053  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:13.023527  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:15.024220  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:17.523858  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:20.023496  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:22.023525  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:24.524255  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:27.023513  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:29.523489  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:31.523644  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:33.524152  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:36.023278  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:38.023537  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:40.023953  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:42.523855  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:44.524143  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:47.024474  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:49.523436  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:51.523513  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:53.523787  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:56.024197  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:58.523874  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:01.023479  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:03.523345  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:05.525556  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:08.023332  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:10.023391  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:12.023546  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:14.524173  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:15.526233  228234 node_ready.go:38] duration metric: took 4m0.009209878s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:09:15.529629  228234 out.go:177] 
	W0921 22:09:15.531085  228234 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:09:15.531107  228234 out.go:239] * 
	W0921 22:09:15.532218  228234 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:09:15.534409  228234 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p embed-certs-20220921220439-10174 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2": exit status 80
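(Annotation: the failure shape above is a control plane that initializes cleanly but a node that never reports Ready — on a docker+containerd+kindnet profile this most often points at the CNI never coming up. A hedged first-pass triage against the still-running cluster would be:

    # the NotReady reason is recorded in the node's conditions
    kubectl get nodes -o wide
    kubectl describe node embed-certs-20220921220439-10174 | grep -A8 'Conditions:'
    # did the CNI / kube-proxy pods ever start?
    kubectl -n kube-system get pods -o wide
)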
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220439-10174
helpers_test.go:235: (dbg) docker inspect embed-certs-20220921220439-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a",
	        "Created": "2022-09-21T22:04:47.451918435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229029,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:04:47.821915918Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hostname",
	        "HostsPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hosts",
	        "LogPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a-json.log",
	        "Name": "/embed-certs-20220921220439-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20220921220439-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220921220439-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220921220439-10174",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220921220439-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220921220439-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9eafa65cab570427f54e672c314a2de414b922ec2d5c452fa77eb94dc7c53c9e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9eafa65cab57",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220921220439-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0efc3a031048",
	                        "embed-certs-20220921220439-10174"
	                    ],
	                    "NetworkID": "e71aa30fd3ace87130e43e4abce1f2566d43d95c3b2e37ab1594e3c5a105c1bc",
	                    "EndpointID": "e12f2a7ae893a2d247b22ed045ec225e1db5924afdba9eb642a202517e80b83a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
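
The inspect output above shows how minikube's kic driver publishes the container's service ports (22 for SSH, 2376 for the Docker API, 8443 for the apiserver, plus 5000 and 32443) as ephemeral ports on 127.0.0.1. Later in this log minikube itself resolves the SSH port with a "docker container inspect -f" Go template. As a minimal sketch only (this is not minikube code; the helper name hostPortFor is invented here, and the docker CLI is assumed to be on PATH), the same lookup looks like this:

	// hostPortFor shells out to `docker container inspect` with the same Go
	// template this log shows minikube running, indexing NetworkSettings.Ports,
	// e.g. "22/tcp" -> "49398" for the container inspected above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPortFor(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Container name taken from the inspect output above.
		p, err := hostPortFor("embed-certs-20220921220439-10174", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh reachable at 127.0.0.1:" + p) // e.g. 127.0.0.1:49398
	}
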
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
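
The post-mortem flow recorded here execs the minikube binary directly: the status probe above (where a non-zero exit can be acceptable for a stopped host), then the capped log collection below. A stripped-down sketch of that "(dbg) Run:" pattern (illustrative only, not the helpers_test.go implementation; the run helper is invented here and out/minikube-linux-amd64 is assumed to exist relative to the working directory):

	// run execs the minikube binary the way the "(dbg) Run:" lines record,
	// capturing stdout+stderr for the report and tolerating non-zero exits.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) string {
		fmt.Println("(dbg) Run: out/minikube-linux-amd64", args)
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			// e.g. exit status 7 from "status" when the host is stopped
			fmt.Println("non-zero exit:", err)
		}
		return string(out)
	}

	func main() {
		profile := "embed-certs-20220921220439-10174" // profile under test above
		_ = run("status", "--format={{.Host}}", "-p", profile, "-n", profile)
		_ = run("-p", profile, "logs", "-n", "25")
	}
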
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |                  Profile                   |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-20220921215721-10174              | pause-20220921215721-10174                 | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	|         | --alsologtostderr -v=5                     |                                            |         |         |                     |                     |
	| profile | list --output json                         | minikube                                   | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	| delete  | -p pause-20220921215721-10174              | pause-20220921215721-10174                 | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:58 UTC |
	| start   | -p                                         | kindnet-20220921215523-10174               | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174               |                                            |         |         |                     |                     |
	|         | --memory=2048                              |                                            |         |         |                     |                     |
	|         | --alsologtostderr                          |                                            |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m              |                                            |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker              |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	| start   | -p                                         | cert-expiration-20220921215524-10174       | jenkins | v1.27.0 | 21 Sep 22 21:58 UTC | 21 Sep 22 21:59 UTC |
	|         | cert-expiration-20220921215524-10174       |                                            |         |         |                     |                     |
	|         | --memory=2048                              |                                            |         |         |                     |                     |
	|         | --cert-expiration=8760h                    |                                            |         |         |                     |                     |
	|         | --driver=docker                            |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	| delete  | -p                                         | cert-expiration-20220921215524-10174       | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | cert-expiration-20220921215524-10174       |                                            |         |         |                     |                     |
	| start   | -p cilium-20220921215524-10174             | cilium-20220921215524-10174                | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                              |                                            |         |         |                     |                     |
	|         | --alsologtostderr                          |                                            |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m              |                                            |         |         |                     |                     |
	|         | --cni=cilium --driver=docker               |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	| ssh     | -p auto-20220921215523-10174               | auto-20220921215523-10174                  | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | pgrep -a kubelet                           |                                            |         |         |                     |                     |
	| delete  | -p auto-20220921215523-10174               | auto-20220921215523-10174                  | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	| start   | -p calico-20220921215524-10174             | calico-20220921215524-10174                | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC |                     |
	|         | --memory=2048                              |                                            |         |         |                     |                     |
	|         | --alsologtostderr                          |                                            |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m              |                                            |         |         |                     |                     |
	|         | --cni=calico --driver=docker               |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	| ssh     | -p                                         | kindnet-20220921215523-10174               | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174               |                                            |         |         |                     |                     |
	|         | pgrep -a kubelet                           |                                            |         |         |                     |                     |
	| delete  | -p                                         | kindnet-20220921215523-10174               | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174               |                                            |         |         |                     |                     |
	| start   | -p                                         | enable-default-cni-20220921215523-10174    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174    |                                            |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr            |                                            |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m              |                                            |         |         |                     |                     |
	|         | --enable-default-cni=true                  |                                            |         |         |                     |                     |
	|         | --driver=docker                            |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	| ssh     | -p cilium-20220921215524-10174             | cilium-20220921215524-10174                | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                           |                                            |         |         |                     |                     |
	| delete  | -p cilium-20220921215524-10174             | cilium-20220921215524-10174                | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	| start   | -p bridge-20220921215523-10174             | bridge-20220921215523-10174                | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                              |                                            |         |         |                     |                     |
	|         | --alsologtostderr                          |                                            |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m              |                                            |         |         |                     |                     |
	|         | --cni=bridge --driver=docker               |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	| ssh     | -p bridge-20220921215523-10174             | bridge-20220921215523-10174                | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                           |                                            |         |         |                     |                     |
	| delete  | -p                                         | kubernetes-upgrade-20220921215522-10174    | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | kubernetes-upgrade-20220921215522-10174    |                                            |         |         |                     |                     |
	| start   | -p                                         | embed-certs-20220921220439-10174           | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC |                     |
	|         | embed-certs-20220921220439-10174           |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr            |                                            |         |         |                     |                     |
	|         | --wait=true --embed-certs                  |                                            |         |         |                     |                     |
	|         | --driver=docker                            |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2               |                                            |         |         |                     |                     |
	| ssh     | -p                                         | enable-default-cni-20220921215523-10174    | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174    |                                            |         |         |                     |                     |
	|         | pgrep -a kubelet                           |                                            |         |         |                     |                     |
	| delete  | -p bridge-20220921215523-10174             | bridge-20220921215523-10174                | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:07 UTC |
	| start   | -p                                         | old-k8s-version-20220921220722-10174       | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC |                     |
	|         | old-k8s-version-20220921220722-10174       |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr            |                                            |         |         |                     |                     |
	|         | --wait=true --kvm-network=default          |                                            |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system              |                                            |         |         |                     |                     |
	|         | --disable-driver-mounts                    |                                            |         |         |                     |                     |
	|         | --keep-context=false --driver=docker       |                                            |         |         |                     |                     |
	|         |  --container-runtime=containerd            |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0               |                                            |         |         |                     |                     |
	| delete  | -p calico-20220921215524-10174             | calico-20220921215524-10174                | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	| delete  | -p                                         | disable-driver-mounts-20220921220831-10174 | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	|         | disable-driver-mounts-20220921220831-10174 |                                            |         |         |                     |                     |
	| start   | -p                                         | no-preload-20220921220832-10174            | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC |                     |
	|         | no-preload-20220921220832-10174            |                                            |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr            |                                            |         |         |                     |                     |
	|         | --wait=true --preload=false                |                                            |         |         |                     |                     |
	|         | --driver=docker                            |                                            |         |         |                     |                     |
	|         | --container-runtime=containerd             |                                            |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2               |                                            |         |         |                     |                     |
	|---------|--------------------------------------------|--------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:08:32
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:08:32.091715  242109 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:08:32.091884  242109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:08:32.091894  242109 out.go:309] Setting ErrFile to fd 2...
	I0921 22:08:32.091899  242109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:08:32.091992  242109 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:08:32.092577  242109 out.go:303] Setting JSON to false
	I0921 22:08:32.094002  242109 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3063,"bootTime":1663795049,"procs":521,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:08:32.094075  242109 start.go:125] virtualization: kvm guest
	I0921 22:08:32.096710  242109 out.go:177] * [no-preload-20220921220832-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:08:32.098227  242109 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:08:32.098237  242109 notify.go:214] Checking for updates...
	I0921 22:08:32.099707  242109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:08:32.101331  242109 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:08:32.103017  242109 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:08:32.104848  242109 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:08:32.106542  242109 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:08:32.106655  242109 config.go:180] Loaded profile config "enable-default-cni-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:08:32.106779  242109 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:08:32.106858  242109 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:08:32.139816  242109 docker.go:137] docker version: linux-20.10.18
	I0921 22:08:32.139902  242109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:08:32.232213  242109 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:08:32.16163627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:08:32.232321  242109 docker.go:254] overlay module found
	I0921 22:08:32.234175  242109 out.go:177] * Using the docker driver based on user configuration
	I0921 22:08:28.393162  236690 system_pods.go:86] 4 kube-system pods found
	I0921 22:08:28.393193  236690 system_pods.go:89] "coredns-5644d7b6d9-mvb9z" [e0c5751f-8edb-4a05-98f3-f275f4311012] Running
	I0921 22:08:28.393199  236690 system_pods.go:89] "kindnet-4dx68" [57f7b124-8ad9-4c40-90fc-b97f4ee44d41] Running
	I0921 22:08:28.393203  236690 system_pods.go:89] "kube-proxy-fxg44" [1cad27a7-3f78-4f83-b317-4902331848a0] Running
	I0921 22:08:28.393207  236690 system_pods.go:89] "storage-provisioner" [cbbbdb01-a62f-468c-9a5b-9d29521ebbaf] Running
	I0921 22:08:28.393219  236690 retry.go:31] will retry after 3.11822781s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:08:31.515328  236690 system_pods.go:86] 4 kube-system pods found
	I0921 22:08:31.515356  236690 system_pods.go:89] "coredns-5644d7b6d9-mvb9z" [e0c5751f-8edb-4a05-98f3-f275f4311012] Running
	I0921 22:08:31.515362  236690 system_pods.go:89] "kindnet-4dx68" [57f7b124-8ad9-4c40-90fc-b97f4ee44d41] Running
	I0921 22:08:31.515367  236690 system_pods.go:89] "kube-proxy-fxg44" [1cad27a7-3f78-4f83-b317-4902331848a0] Running
	I0921 22:08:31.515371  236690 system_pods.go:89] "storage-provisioner" [cbbbdb01-a62f-468c-9a5b-9d29521ebbaf] Running
	I0921 22:08:31.515386  236690 retry.go:31] will retry after 4.276119362s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:08:32.235555  242109 start.go:284] selected driver: docker
	I0921 22:08:32.235579  242109 start.go:808] validating driver "docker" against <nil>
	I0921 22:08:32.235600  242109 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:08:32.236602  242109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:08:32.331368  242109 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:08:32.257384866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:08:32.331533  242109 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:08:32.331680  242109 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:08:32.333663  242109 out.go:177] * Using Docker driver with root privileges
	I0921 22:08:32.335123  242109 cni.go:95] Creating CNI manager for ""
	I0921 22:08:32.335145  242109 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:08:32.335167  242109 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:08:32.335186  242109 start_flags.go:316] config:
	{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:08:32.336990  242109 out.go:177] * Starting control plane node no-preload-20220921220832-10174 in cluster no-preload-20220921220832-10174
	I0921 22:08:32.338434  242109 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:08:32.339908  242109 out.go:177] * Pulling base image ...
	I0921 22:08:32.341304  242109 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:08:32.341332  242109 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:08:32.341430  242109 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:08:32.341468  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json: {Name:mk8234a18099321a9a3e41526d762960614698ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:32.341583  242109 cache.go:107] acquiring lock: {Name:mk964a2e66a5444defeab854e6434a6f27bdb527 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341604  242109 cache.go:107] acquiring lock: {Name:mk0eb3fbf1ee9e76ad78bfdee22277edae17ed2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341628  242109 cache.go:107] acquiring lock: {Name:mk944562b9b2415f3d8e7ad36b373f92205bdb5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341694  242109 cache.go:107] acquiring lock: {Name:mka10a341c76ae214d12cf65b1bbb970ff641c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341730  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0921 22:08:32.341677  242109 cache.go:107] acquiring lock: {Name:mk6ae321142fb89935897137e30217f9ae2499ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341737  242109 cache.go:107] acquiring lock: {Name:mkb5c943b9da9e6c7ecc443b377ab990272f1b2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341746  242109 cache.go:107] acquiring lock: {Name:mk4fab6516978f221b8246a61f380f8ab97f066c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341680  242109 cache.go:107] acquiring lock: {Name:mkee4799116b59e3f65d0127cdad0c25a01a05e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341783  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 exists
	I0921 22:08:32.341788  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 exists
	I0921 22:08:32.341791  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0921 22:08:32.341805  242109 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2" took 211.656µs
	I0921 22:08:32.341807  242109 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2" took 148.262µs
	I0921 22:08:32.341812  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 exists
	I0921 22:08:32.341816  242109 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 succeeded
	I0921 22:08:32.341821  242109 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 succeeded
	I0921 22:08:32.341808  242109 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 233.334µs
	I0921 22:08:32.341830  242109 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2" took 162.443µs
	I0921 22:08:32.341847  242109 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 succeeded
	I0921 22:08:32.341846  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0921 22:08:32.341838  242109 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0921 22:08:32.341824  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0921 22:08:32.341876  242109 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 204.098µs
	I0921 22:08:32.341874  242109 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 192.241µs
	I0921 22:08:32.341753  242109 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 136.592µs
	I0921 22:08:32.341891  242109 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0921 22:08:32.341885  242109 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0921 22:08:32.341891  242109 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0921 22:08:32.341823  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 exists
	I0921 22:08:32.341914  242109 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2" took 286.498µs
	I0921 22:08:32.341930  242109 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 succeeded
	I0921 22:08:32.341939  242109 cache.go:87] Successfully saved all images to host disk.
	I0921 22:08:32.366473  242109 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:08:32.366496  242109 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:08:32.366514  242109 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:08:32.366548  242109 start.go:364] acquiring machines lock for no-preload-20220921220832-10174: {Name:mk189db360f5ac486cb35206c34214af6d1c65b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.366677  242109 start.go:368] acquired machines lock for "no-preload-20220921220832-10174" in 107.952µs
	I0921 22:08:32.366708  242109 start.go:93] Provisioning new machine with config: &{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:08:32.366803  242109 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:08:31.523644  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:33.524152  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:32.369209  242109 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:08:32.369459  242109 start.go:159] libmachine.API.Create for "no-preload-20220921220832-10174" (driver="docker")
	I0921 22:08:32.369482  242109 client.go:168] LocalClient.Create starting
	I0921 22:08:32.369604  242109 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:08:32.369644  242109 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:32.369665  242109 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:32.369721  242109 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:08:32.369753  242109 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:32.369769  242109 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:32.370156  242109 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:08:32.394560  242109 cli_runner.go:211] docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:08:32.394644  242109 network_create.go:272] running [docker network inspect no-preload-20220921220832-10174] to gather additional debugging logs...
	I0921 22:08:32.394665  242109 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174
	W0921 22:08:32.417810  242109 cli_runner.go:211] docker network inspect no-preload-20220921220832-10174 returned with exit code 1
	I0921 22:08:32.417843  242109 network_create.go:275] error running [docker network inspect no-preload-20220921220832-10174]: docker network inspect no-preload-20220921220832-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220832-10174
	I0921 22:08:32.417860  242109 network_create.go:277] output of [docker network inspect no-preload-20220921220832-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220832-10174
	
	** /stderr **
	I0921 22:08:32.417923  242109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:08:32.443298  242109 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:08:32.444161  242109 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:08:32.444798  242109 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-e71aa30fd3ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7a:b1:c8:c1}}
	I0921 22:08:32.445526  242109 network.go:241] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-4f93bc2f061a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ca:b2:42:ce}}
	I0921 22:08:32.446196  242109 network.go:241] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-4878e8461754 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:13:30:5c:d0}}
	I0921 22:08:32.447143  242109 network.go:290] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc0004de080] misses:0}
	I0921 22:08:32.447176  242109 network.go:236] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:08:32.447187  242109 network_create.go:115] attempt to create docker network no-preload-20220921220832-10174 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0921 22:08:32.447236  242109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 no-preload-20220921220832-10174
	I0921 22:08:32.508336  242109 network_create.go:99] docker network no-preload-20220921220832-10174 192.168.94.0/24 created
	I0921 22:08:32.508374  242109 kic.go:106] calculated static IP "192.168.94.2" for the "no-preload-20220921220832-10174" container
	I0921 22:08:32.508432  242109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:08:32.534911  242109 cli_runner.go:164] Run: docker volume create no-preload-20220921220832-10174 --label name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:08:32.559222  242109 oci.go:103] Successfully created a docker volume no-preload-20220921220832-10174
	I0921 22:08:32.559322  242109 cli_runner.go:164] Run: docker run --rm --name no-preload-20220921220832-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --entrypoint /usr/bin/test -v no-preload-20220921220832-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:08:33.139204  242109 oci.go:107] Successfully prepared a docker volume no-preload-20220921220832-10174
	I0921 22:08:33.139255  242109 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	W0921 22:08:33.139369  242109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:08:33.139459  242109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:08:33.234984  242109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20220921220832-10174 --name no-preload-20220921220832-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --network no-preload-20220921220832-10174 --ip 192.168.94.2 --volume no-preload-20220921220832-10174:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:08:33.616738  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Running}}
	I0921 22:08:33.645764  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:08:33.671906  242109 cli_runner.go:164] Run: docker exec no-preload-20220921220832-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:08:33.749023  242109 oci.go:144] the created container "no-preload-20220921220832-10174" has a running status.
	I0921 22:08:33.749062  242109 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa...
	I0921 22:08:33.954020  242109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:08:34.034359  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:08:34.061636  242109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:08:34.061659  242109 kic_runner.go:114] Args: [docker exec --privileged no-preload-20220921220832-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:08:34.136878  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:08:34.163571  242109 machine.go:88] provisioning docker machine ...
	I0921 22:08:34.163605  242109 ubuntu.go:169] provisioning hostname "no-preload-20220921220832-10174"
	I0921 22:08:34.163657  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.189546  242109 main.go:134] libmachine: Using SSH client type: native
	I0921 22:08:34.189772  242109 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49408 <nil> <nil>}
	I0921 22:08:34.189795  242109 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220921220832-10174 && echo "no-preload-20220921220832-10174" | sudo tee /etc/hostname
	I0921 22:08:34.327956  242109 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220921220832-10174
	
	I0921 22:08:34.328041  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.354338  242109 main.go:134] libmachine: Using SSH client type: native
	I0921 22:08:34.354493  242109 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49408 <nil> <nil>}
	I0921 22:08:34.354515  242109 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220921220832-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220921220832-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220921220832-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:08:34.483530  242109 main.go:134] libmachine: SSH cmd err, output: <nil>: 
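	(The shell fragment above is an idempotent /etc/hosts edit: rewrite the 127.0.1.1 line if one exists, otherwise append one. Below is a minimal Go sketch of the same logic — a simplified, hypothetical stand-in for the grep/sed pipeline, not minikube's actual code.)

package main

import (
	"os"
	"strings"
)

// setHostsEntry rewrites the 127.0.1.1 line in an /etc/hosts-style file, or
// appends one if it is missing. Writing /etc/hosts itself requires root.
func setHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Hostname taken from the log above.
	_ = setHostsEntry("/etc/hosts", "no-preload-20220921220832-10174")
}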
	I0921 22:08:34.483570  242109 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:08:34.483615  242109 ubuntu.go:177] setting up certificates
	I0921 22:08:34.483625  242109 provision.go:83] configureAuth start
	I0921 22:08:34.483683  242109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:08:34.511261  242109 provision.go:138] copyHostCerts
	I0921 22:08:34.511329  242109 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:08:34.511341  242109 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:08:34.511416  242109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:08:34.511514  242109 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:08:34.511530  242109 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:08:34.511571  242109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:08:34.511683  242109 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:08:34.511702  242109 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:08:34.511774  242109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:08:34.511857  242109 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220921220832-10174 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220921220832-10174]
	I0921 22:08:34.690415  242109 provision.go:172] copyRemoteCerts
	I0921 22:08:34.690468  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:08:34.690855  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.716097  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:34.807600  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:08:34.826933  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0921 22:08:34.844386  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:08:34.861468  242109 provision.go:86] duration metric: configureAuth took 377.832384ms
	I0921 22:08:34.861491  242109 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:08:34.861655  242109 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:08:34.861668  242109 machine.go:91] provisioned docker machine in 698.07649ms
	I0921 22:08:34.861675  242109 client.go:171] LocalClient.Create took 2.49218544s
	I0921 22:08:34.861696  242109 start.go:167] duration metric: libmachine.API.Create for "no-preload-20220921220832-10174" took 2.492236327s
	I0921 22:08:34.861710  242109 start.go:300] post-start starting for "no-preload-20220921220832-10174" (driver="docker")
	I0921 22:08:34.861721  242109 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:08:34.861758  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:08:34.861812  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.886578  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:34.979345  242109 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:08:34.982130  242109 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:08:34.982152  242109 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:08:34.982162  242109 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:08:34.982168  242109 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:08:34.982186  242109 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:08:34.982233  242109 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:08:34.982302  242109 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:08:34.982377  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:08:34.988919  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:08:35.005937  242109 start.go:303] post-start completed in 144.212626ms
	I0921 22:08:35.006269  242109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:08:35.031597  242109 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:08:35.031860  242109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:08:35.031899  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:35.057271  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:35.144359  242109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:08:35.148566  242109 start.go:128] duration metric: createHost completed in 2.781750164s
	I0921 22:08:35.148594  242109 start.go:83] releasing machines lock for "no-preload-20220921220832-10174", held for 2.781899801s
	I0921 22:08:35.148673  242109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:08:35.174873  242109 ssh_runner.go:195] Run: systemctl --version
	I0921 22:08:35.174925  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:35.174956  242109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:08:35.175024  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:35.201765  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:35.203707  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:35.321934  242109 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:08:35.332356  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:08:35.342047  242109 docker.go:188] disabling docker service ...
	I0921 22:08:35.342105  242109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:08:35.360459  242109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:08:35.370066  242109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:08:35.448272  242109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:08:35.530327  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:08:35.539747  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:08:35.552260  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:08:35.560238  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:08:35.568221  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:08:35.575657  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:08:35.582966  242109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:08:35.589047  242109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:08:35.595109  242109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:08:35.673595  242109 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:08:35.752751  242109 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:08:35.752814  242109 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
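	(The "Will wait 60s for socket path" step above polls until containerd's unix socket is usable after the restart. Below is a minimal sketch of such a wait loop, assuming the 60s budget from the log; waitForSocket is a hypothetical helper, not minikube's code.)

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if conn, err := net.Dial("unix", path); err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
}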
	I0921 22:08:35.756381  242109 start.go:471] Will wait 60s for crictl version
	I0921 22:08:35.756424  242109 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:08:35.780466  242109 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:08:35.780526  242109 ssh_runner.go:195] Run: containerd --version
	I0921 22:08:35.811401  242109 ssh_runner.go:195] Run: containerd --version
	I0921 22:08:35.844655  242109 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:08:35.846055  242109 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:08:35.869527  242109 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0921 22:08:35.872805  242109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:08:35.882612  242109 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:08:35.882651  242109 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:08:35.904673  242109 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.2". assuming images are not preloaded.
	I0921 22:08:35.904696  242109 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.2 registry.k8s.io/kube-controller-manager:v1.25.2 registry.k8s.io/kube-scheduler:v1.25.2 registry.k8s.io/kube-proxy:v1.25.2 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0921 22:08:35.904767  242109 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:35.904796  242109 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:35.904812  242109 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I0921 22:08:35.904824  242109 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:35.904815  242109 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:35.904796  242109 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:35.904779  242109 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:35.904773  242109 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:35.905925  242109 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:35.905934  242109 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.2: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:35.905924  242109 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.2: Error: No such image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:35.905951  242109 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.2: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:35.905920  242109 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.2: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:35.905934  242109 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:35.905935  242109 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:35.905937  242109 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
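	(The section below walks each required image through the same three steps: probe for it with `ctr -n=k8s.io images check`, scp the cached tarball into /var/lib/minikube/images if it is missing, then load it with `ctr images import`. Below is a minimal Go sketch of the final import step; importImage is a hypothetical helper, not minikube's cache_images code.)

package main

import (
	"fmt"
	"os/exec"
)

// importImage loads a cached image tarball into containerd's k8s.io
// namespace, mirroring the "sudo ctr -n=k8s.io images import" runs in the log.
func importImage(tarPath string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import %s: %v: %s", tarPath, err, out)
	}
	return nil
}

func main() {
	// Path format as staged by the transfer step in the log.
	if err := importImage("/var/lib/minikube/images/pause_3.8"); err != nil {
		fmt.Println(err)
	}
}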
	I0921 22:08:36.396123  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I0921 22:08:36.418515  242109 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I0921 22:08:36.418560  242109 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I0921 22:08:36.418594  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.421313  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I0921 22:08:36.435188  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I0921 22:08:36.445557  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I0921 22:08:36.445641  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.8
	I0921 22:08:36.448412  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.2"
	I0921 22:08:36.457433  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.2"
	I0921 22:08:36.457801  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I0921 22:08:36.457831  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I0921 22:08:36.457836  242109 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I0921 22:08:36.457881  242109 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:36.457922  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.460659  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.2"
	I0921 22:08:36.463542  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I0921 22:08:36.479691  242109 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I0921 22:08:36.479829  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
	I0921 22:08:36.485349  242109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.2" does not exist at hash "97801f83949087fbdcc09b1c84ddda0ed5d01f4aabd17787a7714eb2796082b3" in container runtime
	I0921 22:08:36.485408  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:36.485441  242109 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:36.485482  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.485349  242109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.2" needs transfer: "registry.k8s.io/kube-proxy:v1.25.2" does not exist at hash "1c7d8c51823b5eb08189d553d911097ec8a6a40fea40bb5bdea91842f30d2e86" in container runtime
	I0921 22:08:36.485551  242109 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:36.485599  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.487598  242109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.2" does not exist at hash "ca0ea1ee3cfd3d1ced15a8e6f4a236a436c5733b20a0b2dbbfbfd59977e12959" in container runtime
	I0921 22:08:36.487632  242109 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:36.487660  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.493591  242109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I0921 22:08:36.493629  242109 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:36.493657  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.506710  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.2"
	I0921 22:08:36.626371  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I0921 22:08:36.626472  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:36.626536  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I0921 22:08:36.626616  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:36.626623  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.4-0
	I0921 22:08:36.626652  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:36.626691  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:36.626719  242109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.2" does not exist at hash "dbfceb93c69b6d85661fe46c3e50de9e927e4895ebba2892a1db116e69c81890" in container runtime
	I0921 22:08:36.626756  242109 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:36.626784  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.677935  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2
	I0921 22:08:36.678001  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2
	I0921 22:08:36.678015  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I0921 22:08:36.678033  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I0921 22:08:36.678081  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0921 22:08:36.678037  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 22:08:36.678082  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 22:08:36.678151  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3
	I0921 22:08:36.683332  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2
	I0921 22:08:36.683384  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0921 22:08:36.683420  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I0921 22:08:36.683447  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 22:08:36.683467  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.2': No such file or directory
	I0921 22:08:36.683488  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.2': No such file or directory
	I0921 22:08:36.683346  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:36.683509  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 --> /var/lib/minikube/images/kube-proxy_v1.25.2 (20265472 bytes)
	I0921 22:08:36.683491  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 --> /var/lib/minikube/images/kube-apiserver_v1.25.2 (34238464 bytes)
	I0921 22:08:36.777292  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2
	I0921 22:08:36.777342  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.2': No such file or directory
	I0921 22:08:36.777377  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 --> /var/lib/minikube/images/kube-scheduler_v1.25.2 (15798784 bytes)
	I0921 22:08:36.777402  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 22:08:36.814198  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.2': No such file or directory
	I0921 22:08:36.814239  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 --> /var/lib/minikube/images/kube-controller-manager_v1.25.2 (31264256 bytes)
	I0921 22:08:36.816317  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0921 22:08:36.913665  242109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0921 22:08:36.913720  242109 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:36.913763  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.976718  242109 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I0921 22:08:36.976800  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I0921 22:08:36.978381  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:35.796442  236690 system_pods.go:86] 4 kube-system pods found
	I0921 22:08:35.796468  236690 system_pods.go:89] "coredns-5644d7b6d9-mvb9z" [e0c5751f-8edb-4a05-98f3-f275f4311012] Running
	I0921 22:08:35.796473  236690 system_pods.go:89] "kindnet-4dx68" [57f7b124-8ad9-4c40-90fc-b97f4ee44d41] Running
	I0921 22:08:35.796477  236690 system_pods.go:89] "kube-proxy-fxg44" [1cad27a7-3f78-4f83-b317-4902331848a0] Running
	I0921 22:08:35.796481  236690 system_pods.go:89] "storage-provisioner" [cbbbdb01-a62f-468c-9a5b-9d29521ebbaf] Running
	I0921 22:08:35.796494  236690 retry.go:31] will retry after 5.167232101s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:08:36.023278  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:38.023537  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:37.888410  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I0921 22:08:37.888449  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 22:08:37.888504  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 22:08:37.888508  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0921 22:08:37.888581  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0921 22:08:38.829468  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 from cache
	I0921 22:08:38.829502  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0921 22:08:38.829516  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 22:08:38.829527  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0921 22:08:38.829554  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 22:08:39.652337  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 from cache
	I0921 22:08:39.652378  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 22:08:39.652422  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 22:08:41.039289  242109 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.2: (1.386836292s)
	I0921 22:08:41.039320  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 from cache
	I0921 22:08:41.039353  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 22:08:41.039389  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 22:08:40.966976  236690 system_pods.go:86] 4 kube-system pods found
	I0921 22:08:40.967004  236690 system_pods.go:89] "coredns-5644d7b6d9-mvb9z" [e0c5751f-8edb-4a05-98f3-f275f4311012] Running
	I0921 22:08:40.967010  236690 system_pods.go:89] "kindnet-4dx68" [57f7b124-8ad9-4c40-90fc-b97f4ee44d41] Running
	I0921 22:08:40.967015  236690 system_pods.go:89] "kube-proxy-fxg44" [1cad27a7-3f78-4f83-b317-4902331848a0] Running
	I0921 22:08:40.967018  236690 system_pods.go:89] "storage-provisioner" [cbbbdb01-a62f-468c-9a5b-9d29521ebbaf] Running
	I0921 22:08:40.967032  236690 retry.go:31] will retry after 6.994901864s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:08:40.023953  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:42.523855  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:44.524143  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:42.303033  242109 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.2: (1.263610588s)
	I0921 22:08:42.303060  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 from cache
	I0921 22:08:42.303087  242109 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I0921 22:08:42.303129  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I0921 22:08:46.156350  242109 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (3.853187961s)
	I0921 22:08:46.156380  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I0921 22:08:46.156407  242109 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0921 22:08:46.156452  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0921 22:08:46.590489  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0921 22:08:46.590543  242109 cache_images.go:123] Successfully loaded all cached images
	I0921 22:08:46.590551  242109 cache_images.go:92] LoadImages completed in 10.685842974s
	I0921 22:08:46.590602  242109 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:08:46.615686  242109 cni.go:95] Creating CNI manager for ""
	I0921 22:08:46.615709  242109 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:08:46.615755  242109 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:08:46.615772  242109 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220921220832-10174 NodeName:no-preload-20220921220832-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:08:46.615927  242109 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220921220832-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
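	(The kubeadm config above is rendered from the options struct logged at kubeadm.go:156. Below is a minimal sketch of how a template-driven render of the InitConfiguration fragment could look, using field values taken from this log; the template text and struct here are hypothetical, not minikube's actual generator.)

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

type params struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
}

func main() {
	// Values copied from the kubeadm options line in the log.
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.94.2",
		BindPort:         8443,
		CRISocket:        "/run/containerd/containerd.sock",
		NodeName:         "no-preload-20220921220832-10174",
	})
}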
	
	I0921 22:08:46.616044  242109 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220921220832-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0921 22:08:46.616099  242109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:08:46.623273  242109 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.25.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.25.2': No such file or directory
	
	Initiating transfer...
	I0921 22:08:46.623321  242109 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.25.2
	I0921 22:08:46.630033  242109 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubectl.sha256
	I0921 22:08:46.630057  242109 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubelet.sha256
	I0921 22:08:46.630073  242109 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubeadm.sha256
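	(Each "Not caching binary" line above pairs a release binary URL with a ?checksum=file:...sha256 companion, i.e. the binary is fetched and verified against its published SHA-256 digest. Below is a minimal sketch of that verify-on-download pattern, assuming the kubectl URL from the log and that the .sha256 file holds the bare hex digest; this is not minikube's downloader.)

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory and fails on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubectl"
	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	want, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != strings.TrimSpace(string(want)) {
		panic("checksum mismatch for " + url)
	}
	fmt.Println("kubectl verified")
}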
	I0921 22:08:46.630100  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:08:46.630114  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.25.2/kubectl
	I0921 22:08:46.630154  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.25.2/kubeadm
	I0921 22:08:46.633651  242109 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.25.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.2/kubectl': No such file or directory
	I0921 22:08:46.633687  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/linux/amd64/v1.25.2/kubectl --> /var/lib/minikube/binaries/v1.25.2/kubectl (45015040 bytes)
	I0921 22:08:46.641795  242109 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.25.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.2/kubeadm': No such file or directory
	I0921 22:08:46.641822  242109 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.25.2/kubelet
	I0921 22:08:46.641822  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/linux/amd64/v1.25.2/kubeadm --> /var/lib/minikube/binaries/v1.25.2/kubeadm (43798528 bytes)
	I0921 22:08:46.657920  242109 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.25.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.2/kubelet': No such file or directory
	I0921 22:08:46.657958  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/linux/amd64/v1.25.2/kubelet --> /var/lib/minikube/binaries/v1.25.2/kubelet (114229208 bytes)
	I0921 22:08:47.042335  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:08:47.049185  242109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0921 22:08:47.064259  242109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:08:47.077743  242109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0921 22:08:47.091446  242109 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:08:47.094777  242109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:08:47.104435  242109 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174 for IP: 192.168.94.2
	I0921 22:08:47.104536  242109 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:08:47.104571  242109 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:08:47.104617  242109 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key
	I0921 22:08:47.104631  242109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.crt with IP's: []
	I0921 22:08:47.322756  242109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.crt ...
	I0921 22:08:47.322786  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.crt: {Name:mk85591f6c78ee9c1b821877f8a8e1ba8c002ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.322985  242109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key ...
	I0921 22:08:47.323000  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key: {Name:mkd74bb0553ae0b96fa9591e0ef94fcbd07d1fca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.323087  242109 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a
	I0921 22:08:47.323102  242109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:08:47.483755  242109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a ...
	I0921 22:08:47.483790  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a: {Name:mk48e4b74038505c40285e03d6ebaeb0f1a7facc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.484008  242109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a ...
	I0921 22:08:47.484027  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a: {Name:mk3e8ff442c58e1eb897e504d0c2602cf9404be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.484121  242109 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt
	I0921 22:08:47.484181  242109 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key
	I0921 22:08:47.484233  242109 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key
	I0921 22:08:47.484249  242109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt with IP's: []
	I0921 22:08:47.723751  242109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt ...
	I0921 22:08:47.723784  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt: {Name:mk03b5ee8cea1d4f283d674c427e7d33342a4be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.723972  242109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key ...
	I0921 22:08:47.723984  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key: {Name:mkf65de18c4cb3a81c8c54e3c1c9e9fc7b6259b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
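	(The crypto.go steps above generate leaf certificates — client, apiserver, proxy-client — signed by the profile's CA and write them under .minikube/profiles. Below is a minimal sketch of issuing one CA-signed client cert with Go's standard library; the CA here is generated in-memory for self-containment, whereas minikube loads ca.crt/ca.key from disk, and all names and lifetimes are illustrative.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical in-memory CA; minikube reuses ca.crt/ca.key from .minikube.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf client certificate signed by the CA, with an IP SAN as in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.94.2")},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}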
	I0921 22:08:47.724155  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:08:47.724197  242109 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:08:47.724217  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:08:47.724246  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:08:47.724271  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:08:47.724296  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:08:47.724334  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:08:47.724847  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:08:47.743210  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:08:47.760616  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:08:47.777454  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:08:47.795008  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:08:47.813029  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:08:47.830326  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:08:47.846904  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:08:47.863979  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:08:47.881065  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:08:47.897965  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:08:47.914592  242109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:08:47.927516  242109 ssh_runner.go:195] Run: openssl version
	I0921 22:08:47.932439  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:08:47.939571  242109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:08:47.942739  242109 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:08:47.942793  242109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:08:47.947595  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:08:47.954844  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:08:47.962246  242109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:08:47.965549  242109 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:08:47.965589  242109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:08:47.970478  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:08:47.977766  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:08:47.985200  242109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:08:47.988442  242109 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:08:47.988488  242109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:08:47.993172  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
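	The openssl/ln sequence above is OpenSSL's hashed-directory convention: each CA dropped into /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can locate it by subject hash. A minimal Go sketch of the same idiom, assuming a local run with an illustrative path and a hypothetical installCA helper (the real code drives these commands through ssh_runner inside the node container):

	// installCA mirrors the logged pair of commands:
	//   openssl x509 -hash -noout -in <pem>    -> subject hash, e.g. b5213941
	//   ln -fs <pem> /etc/ssl/certs/<hash>.0
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // ln -f semantics: replace a stale link if present
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}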
	I0921 22:08:48.000573  242109 kubeadm.go:396] StartCluster: {Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:08:48.000677  242109 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:08:48.000726  242109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:08:48.025134  242109 cri.go:87] found id: ""
	I0921 22:08:48.025190  242109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:08:48.032261  242109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:08:48.039171  242109 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:08:48.039231  242109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:08:48.046214  242109 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
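	The exit status 2 here is the expected outcome, not a failure: the ls is a cheap existence probe for the four kubeconfigs a previous control plane would have left behind. Any missing file means a fresh node, so stale-config cleanup is skipped and kubeadm init runs directly. A rough local equivalent of that probe, with a hypothetical staleConfigPresent helper and os.Stat standing in for the SSH ls:

	package main

	import (
		"fmt"
		"os"
	)

	// staleConfigPresent reports whether all kubeconfigs from a prior
	// cluster exist; one miss is enough to treat the node as fresh.
	func staleConfigPresent() bool {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if _, err := os.Stat(f); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		fmt.Println("stale config present:", staleConfigPresent())
	}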
	I0921 22:08:48.046299  242109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:08:48.088140  242109 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:08:48.088209  242109 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:08:48.117792  242109 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:08:48.117878  242109 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:08:48.117923  242109 kubeadm.go:317] OS: Linux
	I0921 22:08:48.117984  242109 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:08:48.118081  242109 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:08:48.118147  242109 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:08:48.118219  242109 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:08:48.118316  242109 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:08:48.118385  242109 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:08:48.118447  242109 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:08:48.118555  242109 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:08:48.118644  242109 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:08:48.180626  242109 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:08:48.180773  242109 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:08:48.180889  242109 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:08:48.297089  242109 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:08:47.024474  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:49.523436  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:48.299973  242109 out.go:204]   - Generating certificates and keys ...
	I0921 22:08:48.300084  242109 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:08:48.300159  242109 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:08:48.345407  242109 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:08:48.412890  242109 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:08:48.459640  242109 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:08:48.537047  242109 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:08:48.688929  242109 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:08:48.689106  242109 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-20220921220832-10174] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0921 22:08:48.857202  242109 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:08:48.857367  242109 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-20220921220832-10174] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0921 22:08:49.098125  242109 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:08:49.259620  242109 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:08:49.346098  242109 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:08:49.346223  242109 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:08:49.494334  242109 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:08:49.729704  242109 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:08:49.888182  242109 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:08:50.100841  242109 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:08:50.112065  242109 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:08:50.112909  242109 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:08:50.112969  242109 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:08:50.192663  242109 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:08:50.195940  242109 out.go:204]   - Booting up control plane ...
	I0921 22:08:50.196092  242109 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:08:50.197016  242109 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:08:50.197843  242109 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:08:50.198576  242109 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:08:50.200535  242109 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:08:47.965740  236690 system_pods.go:86] 4 kube-system pods found
	I0921 22:08:47.965766  236690 system_pods.go:89] "coredns-5644d7b6d9-mvb9z" [e0c5751f-8edb-4a05-98f3-f275f4311012] Running
	I0921 22:08:47.965772  236690 system_pods.go:89] "kindnet-4dx68" [57f7b124-8ad9-4c40-90fc-b97f4ee44d41] Running
	I0921 22:08:47.965776  236690 system_pods.go:89] "kube-proxy-fxg44" [1cad27a7-3f78-4f83-b317-4902331848a0] Running
	I0921 22:08:47.965780  236690 system_pods.go:89] "storage-provisioner" [cbbbdb01-a62f-468c-9a5b-9d29521ebbaf] Running
	I0921 22:08:47.965795  236690 retry.go:31] will retry after 7.91826225s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:08:51.523513  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:53.523787  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:56.703357  242109 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502776 seconds
	I0921 22:08:56.703469  242109 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:08:56.711328  242109 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:08:55.888673  236690 system_pods.go:86] 4 kube-system pods found
	I0921 22:08:55.888703  236690 system_pods.go:89] "coredns-5644d7b6d9-mvb9z" [e0c5751f-8edb-4a05-98f3-f275f4311012] Running
	I0921 22:08:55.888709  236690 system_pods.go:89] "kindnet-4dx68" [57f7b124-8ad9-4c40-90fc-b97f4ee44d41] Running
	I0921 22:08:55.888713  236690 system_pods.go:89] "kube-proxy-fxg44" [1cad27a7-3f78-4f83-b317-4902331848a0] Running
	I0921 22:08:55.888717  236690 system_pods.go:89] "storage-provisioner" [cbbbdb01-a62f-468c-9a5b-9d29521ebbaf] Running
	I0921 22:08:55.888731  236690 retry.go:31] will retry after 9.953714808s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
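	These retry.go lines are the control-plane wait loop: list the kube-system pods, name whatever is still missing, sleep a growing (jittered, in the real code) interval, and try again until an overall deadline. A simplified sketch of that shape, with hypothetical waitForComponents/check names and fixed growth instead of minikube's jitter:

	package main

	import (
		"fmt"
		"time"
	)

	func waitForComponents(check func() []string, deadline time.Duration) error {
		wait := 5 * time.Second
		start := time.Now()
		for time.Since(start) < deadline {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
			time.Sleep(wait)
			wait += wait / 2 // back off between probes
		}
		return fmt.Errorf("timed out waiting for system pods")
	}

	func main() {
		polls := 0
		err := waitForComponents(func() []string {
			polls++
			if polls < 3 { // stand-in: pods show up on the third poll
				return []string{"etcd", "kube-apiserver"}
			}
			return nil
		}, 2*time.Minute)
		fmt.Println("err =", err)
	}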
	I0921 22:08:57.227288  242109 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:08:57.227573  242109 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:08:57.736874  242109 kubeadm.go:317] [bootstrap-token] Using token: uutotp.tqwybgup8rypvhi1
	I0921 22:08:57.738394  242109 out.go:204]   - Configuring RBAC rules ...
	I0921 22:08:57.738514  242109 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:08:57.741343  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:08:57.745866  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:08:57.747957  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0921 22:08:57.749935  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:08:57.751680  242109 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:08:57.758707  242109 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:08:57.969367  242109 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:08:58.180290  242109 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:08:58.181366  242109 kubeadm.go:317] 
	I0921 22:08:58.181465  242109 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:08:58.181486  242109 kubeadm.go:317] 
	I0921 22:08:58.181560  242109 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:08:58.181569  242109 kubeadm.go:317] 
	I0921 22:08:58.181589  242109 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:08:58.181650  242109 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:08:58.181740  242109 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:08:58.181760  242109 kubeadm.go:317] 
	I0921 22:08:58.181832  242109 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:08:58.181845  242109 kubeadm.go:317] 
	I0921 22:08:58.181920  242109 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:08:58.181934  242109 kubeadm.go:317] 
	I0921 22:08:58.181980  242109 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:08:58.182064  242109 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:08:58.182156  242109 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:08:58.182168  242109 kubeadm.go:317] 
	I0921 22:08:58.182279  242109 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:08:58.182379  242109 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:08:58.182392  242109 kubeadm.go:317] 
	I0921 22:08:58.182481  242109 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token uutotp.tqwybgup8rypvhi1 \
	I0921 22:08:58.182570  242109 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:08:58.182608  242109 kubeadm.go:317] 	--control-plane 
	I0921 22:08:58.182621  242109 kubeadm.go:317] 
	I0921 22:08:58.182729  242109 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:08:58.182743  242109 kubeadm.go:317] 
	I0921 22:08:58.182860  242109 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token uutotp.tqwybgup8rypvhi1 \
	I0921 22:08:58.182966  242109 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:08:58.184610  242109 kubeadm.go:317] W0921 22:08:48.080443    1165 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:08:58.185021  242109 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:08:58.185195  242109 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
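	kubeadm's [wait-control-plane]/[apiclient] phase amounts to polling the apiserver until it answers healthy, then reporting the elapsed time (6.502776 seconds above). A small sketch of such a probe against this cluster's endpoint; InsecureSkipVerify is only for the sketch, since the apiserver serves a self-signed chain and kubeadm itself verifies it properly:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		start := time.Now()
		for time.Since(start) < 4*time.Minute { // the "up to 4m0s" budget
			resp, err := client.Get("https://192.168.94.2:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Printf("control plane healthy after %f seconds\n", time.Since(start).Seconds())
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for a healthy apiserver")
	}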
	I0921 22:08:58.185219  242109 cni.go:95] Creating CNI manager for ""
	I0921 22:08:58.185230  242109 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:08:58.187251  242109 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:08:56.024197  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:58.523874  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:08:58.188674  242109 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:08:58.193540  242109 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:08:58.193563  242109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:08:58.207619  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:08:58.977362  242109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:08:58.977533  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:08:58.977535  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_08_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:08:58.984616  242109 ops.go:34] apiserver oom_adj: -16
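	The oom_adj probe a few lines up confirms the kubelet applied a protective OOM score to the apiserver (-16), so the kernel will sacrifice other processes first under memory pressure. The same check in Go, using the identical pgrep + /proc path as the logged one-liner (modern kernels prefer oom_score_adj; oom_adj is kept for compatibility):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			return
		}
		pid := strings.Fields(string(out))[0] // first matching pid
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}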
	I0921 22:08:59.085839  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:08:59.647822  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:00.147844  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:00.647817  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:01.148095  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:01.647490  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:01.023479  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:03.523345  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:02.147631  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:02.647704  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:03.147462  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:03.647170  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:04.148196  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:04.647896  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:05.147797  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:05.647835  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:06.147616  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:06.648057  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:05.847686  236690 system_pods.go:86] 8 kube-system pods found
	I0921 22:09:05.847782  236690 system_pods.go:89] "coredns-5644d7b6d9-mvb9z" [e0c5751f-8edb-4a05-98f3-f275f4311012] Running
	I0921 22:09:05.847797  236690 system_pods.go:89] "etcd-old-k8s-version-20220921220722-10174" [b7bd26c2-dfb1-46c3-bc09-b42c15312f51] Pending
	I0921 22:09:05.847802  236690 system_pods.go:89] "kindnet-4dx68" [57f7b124-8ad9-4c40-90fc-b97f4ee44d41] Running
	I0921 22:09:05.847812  236690 system_pods.go:89] "kube-apiserver-old-k8s-version-20220921220722-10174" [1669b826-d41e-4263-aee2-113fe00748a7] Pending
	I0921 22:09:05.847820  236690 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220921220722-10174" [4ade8d6c-9a47-4fe5-8454-df4bfd172359] Pending
	I0921 22:09:05.847824  236690 system_pods.go:89] "kube-proxy-fxg44" [1cad27a7-3f78-4f83-b317-4902331848a0] Running
	I0921 22:09:05.847832  236690 system_pods.go:89] "kube-scheduler-old-k8s-version-20220921220722-10174" [096a5b25-4648-4853-856e-06654ea527b6] Pending
	I0921 22:09:05.847839  236690 system_pods.go:89] "storage-provisioner" [cbbbdb01-a62f-468c-9a5b-9d29521ebbaf] Running
	I0921 22:09:05.847853  236690 retry.go:31] will retry after 15.120437328s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:09:05.525556  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:08.023332  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:07.148138  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:07.648259  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:08.147324  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:08.647825  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:09.147235  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:09.647560  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:10.148226  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:10.647618  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:11.148232  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:11.443591  242109 kubeadm.go:1067] duration metric: took 12.466122573s to wait for elevateKubeSystemPrivileges.
	I0921 22:09:11.443629  242109 kubeadm.go:398] StartCluster complete in 23.443059645s
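	The 12.5 seconds of repeated `kubectl get sa default` calls before this summary are the elevateKubeSystemPrivileges wait: the minikube-rbac cluster-admin binding targets kube-system's default service account, which kube-controller-manager creates asynchronously, so minikube polls for it roughly every 500ms. A sketch of that poll, with binary and kubeconfig paths copied from the log and an illustrative one-minute bound:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.25.2/kubectl"
		start := time.Now()
		for time.Since(start) < time.Minute {
			err := exec.Command(kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil { // exit 0: the service account exists now
				fmt.Printf("default SA ready after %v\n", time.Since(start))
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}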
	I0921 22:09:11.443651  242109 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:09:11.443796  242109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:09:11.445698  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0921 22:09:12.105891  242109 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0921 22:09:13.224528  242109 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
	I0921 22:09:13.224598  242109 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:09:13.226122  242109 out.go:177] * Verifying Kubernetes components...
	I0921 22:09:13.224654  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:09:13.224663  242109 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:09:13.224812  242109 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:09:13.227415  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:09:13.227468  242109 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:09:13.227491  242109 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	W0921 22:09:13.227496  242109 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:09:13.227470  242109 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:09:13.227573  242109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:09:13.227538  242109 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:09:13.228056  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:09:13.228219  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:09:13.263194  242109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:09:13.276187  242109 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:09:13.277014  242109 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:09:13.277169  242109 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:09:13.277190  242109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:09:13.277252  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:09:13.277322  242109 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:09:13.277887  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:09:13.312648  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:09:13.317631  242109 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:09:13.317656  242109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:09:13.317711  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:09:13.328590  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:09:13.330064  242109 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:09:13.347171  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:09:13.489796  242109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:09:13.493096  242109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:09:13.790286  242109 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:09:13.930600  242109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
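	The sed pipeline at 22:09:13.328590 performs the CoreDNS edit reported here: it splices a hosts{} stanza in front of the forward directive so host.minikube.internal resolves to the host gateway (192.168.94.1). The same edit as plain string surgery, with a hypothetical injectHostRecord helper and a sample Corefile; the real flow round-trips the coredns ConfigMap through kubectl replace:

	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, ip string) string {
		hosts := "        hosts {\n" +
			"           " + ip + " host.minikube.internal\n" +
			"           fallthrough\n" +
			"        }\n"
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out.WriteString(hosts) // insert just before the forward block
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Print(injectHostRecord(sample, "192.168.94.1"))
	}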
	I0921 22:09:10.023391  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:12.023546  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:14.524173  228234 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:09:15.526233  228234 node_ready.go:38] duration metric: took 4m0.009209878s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:09:15.529629  228234 out.go:177] 
	W0921 22:09:15.531085  228234 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:09:15.531107  228234 out.go:239] * 
	W0921 22:09:15.532218  228234 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:09:15.534409  228234 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	173ec9492ce0e       d921cee849482       About a minute ago   Running             kindnet-cni               1                   01f72770699db
	ca78ef37b396f       d921cee849482       4 minutes ago        Exited              kindnet-cni               0                   01f72770699db
	2c132c99660ac       1c7d8c51823b5       4 minutes ago        Running             kube-proxy                0                   165337b73f95e
	4c0ef4a5b3254       97801f8394908       4 minutes ago        Running             kube-apiserver            0                   21fbb3d04e7ee
	6dc0cbf3dcda3       dbfceb93c69b6       4 minutes ago        Running             kube-controller-manager   0                   c9f16d90611ed
	07e2b5e608591       a8a176a5d5d69       4 minutes ago        Running             etcd                      0                   c5347b7c3fd3f
	50596ff38ce68       ca0ea1ee3cfd3       4 minutes ago        Running             kube-scheduler            0                   a1f596d0a7b61
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:04:48 UTC, end at Wed 2022-09-21 22:09:16 UTC. --
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.002533838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.002549752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.002783482Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606 pid=1690 runtime=io.containerd.runc.v2
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.003627645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.003702185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.003757925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.003966032Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/165337b73f95e1dfb88d391a7e90f639e6952ff5ed8a25d8090fe5849dd46744 pid=1698 runtime=io.containerd.runc.v2
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.051004212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s7c85,Uid:8fbb5ba1-1742-4f87-9204-633c80ba11ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"165337b73f95e1dfb88d391a7e90f639e6952ff5ed8a25d8090fe5849dd46744\""
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.053930097Z" level=info msg="CreateContainer within sandbox \"165337b73f95e1dfb88d391a7e90f639e6952ff5ed8a25d8090fe5849dd46744\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.082729923Z" level=info msg="CreateContainer within sandbox \"165337b73f95e1dfb88d391a7e90f639e6952ff5ed8a25d8090fe5849dd46744\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334\""
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.083477802Z" level=info msg="StartContainer for \"2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334\""
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.181086689Z" level=info msg="StartContainer for \"2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334\" returns successfully"
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.277523287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-mqr9d,Uid:1dcc030c-e4fc-498d-a309-94f66d79cd24,Namespace:kube-system,Attempt:0,} returns sandbox id \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\""
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.280313402Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.296002353Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5\""
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.296503641Z" level=info msg="StartContainer for \"ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5\""
	Sep 21 22:05:15 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:05:15.481441445Z" level=info msg="StartContainer for \"ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5\" returns successfully"
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.026988211Z" level=info msg="shim disconnected" id=ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.027057652Z" level=warning msg="cleaning up after shim disconnected" id=ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5 namespace=k8s.io
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.027075872Z" level=info msg="cleaning up dead shim"
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.037153185Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:07:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2114 runtime=io.containerd.runc.v2\n"
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.590636524Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.606018972Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc\""
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.606597807Z" level=info msg="StartContainer for \"173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc\""
	Sep 21 22:07:56 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:07:56.780836714Z" level=info msg="StartContainer for \"173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220921220439-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220921220439-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=embed-certs-20220921220439-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_05_03_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:04:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220921220439-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:09:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:05:12 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:05:12 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:05:12 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:05:12 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220921220439-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                39299add-007b-4517-8e1f-4d420ff2375f
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220921220439-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m15s
	  kube-system                 kindnet-mqr9d                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20220921220439-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-embed-certs-20220921220439-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-s7c85                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20220921220439-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m22s)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x5 over 4m22s)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller
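	The Ready=False condition above (KubeletNotReady, "cni plugin not initialized") is exactly what node_ready.go kept observing for four minutes before the timeout. A sketch of that readiness check via client-go; the kubeconfig path and the client-go module dependency are assumptions, and the test wraps this lookup in a poll with a 6m budget:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"embed-certs-20220921220439-10174", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Here: Status=False, Reason=KubeletNotReady until the
				// CNI plugin initializes.
				fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
			}
		}
	}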
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959858] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027920] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[ +23.932842] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.025490] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019929] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.951847] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.015861] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023931] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959838] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.007878] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023951] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [07e2b5e608591e85913066c0986a5bd5ea1bf1e68a095ae9fea95c89af2a5837] <==
	* {"level":"info","ts":"2022-09-21T22:04:55.692Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:04:55.692Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-21T22:04:55.692Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:04:56.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220921220439-10174 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:04:56.583Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:04:56.583Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-09-21T22:09:10.633Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.024913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220921220439-10174\" ","response":"range_response_count:1 size:4776"}
	{"level":"warn","ts":"2022-09-21T22:09:10.633Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.059944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2022-09-21T22:09:10.633Z","caller":"traceutil/trace.go:171","msg":"trace[1448235816] range","detail":"{range_begin:/registry/minions/embed-certs-20220921220439-10174; range_end:; response_count:1; response_revision:435; }","duration":"111.159312ms","start":"2022-09-21T22:09:10.521Z","end":"2022-09-21T22:09:10.633Z","steps":["trace[1448235816] 'agreement among raft nodes before linearized reading'  (duration: 14.689355ms)","trace[1448235816] 'range keys from in-memory index tree'  (duration: 96.284607ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:09:10.633Z","caller":"traceutil/trace.go:171","msg":"trace[467199965] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:435; }","duration":"185.216087ms","start":"2022-09-21T22:09:10.447Z","end":"2022-09-21T22:09:10.633Z","steps":["trace[467199965] 'agreement among raft nodes before linearized reading'  (duration: 88.71972ms)","trace[467199965] 'range keys from in-memory index tree'  (duration: 96.312032ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  22:09:16 up 51 min,  0 users,  load average: 2.81, 2.28, 2.12
	Linux embed-certs-20220921220439-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [4c0ef4a5b32546e86e84aa28e2b53370eb4c462d47208c2f4053d8a94da4e5d0] <==
	* I0921 22:04:58.970037       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:04:58.970094       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:04:58.970121       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:04:58.970138       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:04:58.970794       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:04:58.975648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:04:58.981933       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:04:58.982835       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:04:59.641473       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:04:59.874406       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:04:59.877433       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:04:59.877457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:05:00.281431       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:05:00.320037       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:05:00.423949       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:05:00.430489       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0921 22:05:00.431476       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:05:00.435394       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:05:00.922548       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:05:02.028149       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:05:02.035424       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:05:02.043930       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:05:02.121398       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:05:14.537469       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0921 22:05:14.636887       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [6dc0cbf3dcda3fe8512f7ac309f5980e6a0d33dedf1aafdf4d79890ef21016e9] <==
	* I0921 22:05:13.874626       1 shared_informer.go:262] Caches are synced for persistent volume
	I0921 22:05:13.886424       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:05:13.917195       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0921 22:05:13.918384       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:05:13.918417       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:05:13.918466       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:05:13.930142       1 shared_informer.go:262] Caches are synced for taint
	I0921 22:05:13.930254       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0921 22:05:13.930304       1 taint_manager.go:209] "Sending events to api server"
	I0921 22:05:13.930264       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0921 22:05:13.930331       1 event.go:294] "Event occurred" object="embed-certs-20220921220439-10174" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller"
	W0921 22:05:13.930431       1 node_lifecycle_controller.go:1058] Missing timestamp for Node embed-certs-20220921220439-10174. Assuming now as a timestamp.
	I0921 22:05:13.930465       1 shared_informer.go:262] Caches are synced for daemon sets
	I0921 22:05:13.930488       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0921 22:05:13.983455       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:05:14.306350       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:05:14.321838       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:05:14.321866       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:05:14.539394       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:05:14.642432       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s7c85"
	I0921 22:05:14.643931       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mqr9d"
	I0921 22:05:14.793697       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-7xblt"
	I0921 22:05:14.797759       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-qn9gp"
	I0921 22:05:14.910189       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:05:14.921777       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-7xblt"
	
	* 
	* ==> kube-proxy [2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334] <==
	* I0921 22:05:15.218738       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0921 22:05:15.218815       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0921 22:05:15.218851       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:05:15.238164       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:05:15.238196       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:05:15.238214       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:05:15.238239       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:05:15.238267       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:05:15.238431       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:05:15.239025       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:05:15.239051       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:05:15.240122       1 config.go:317] "Starting service config controller"
	I0921 22:05:15.240165       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:05:15.240172       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:05:15.240184       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:05:15.240219       1 config.go:444] "Starting node config controller"
	I0921 22:05:15.240262       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:05:15.340574       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:05:15.340602       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:05:15.340643       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [50596ff38ce686be7705ce5777cdda9e90065d702d371e9f4371f62b19f49c34] <==
	* E0921 22:04:58.990827       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:04:58.990832       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:04:58.990783       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.990919       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:58.990919       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.990948       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:58.990679       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:04:58.990969       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:04:58.990956       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.991007       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:59.818146       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0921 22:04:59.818183       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:04:59.872480       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:04:59.872523       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:05:00.056545       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:05:00.056587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:05:00.076767       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:05:00.076819       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:05:00.086747       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:05:00.086784       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:05:00.106863       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:05:00.106895       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:05:00.153313       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:05:00.153358       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0921 22:05:02.886016       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:04:48 UTC, end at Wed 2022-09-21 22:09:16 UTC. --
	Sep 21 22:07:17 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:17.436865    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:22 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:22.437824    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:27 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:27.438671    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:32 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:32.439769    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:37 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:37.440673    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:42 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:42.441510    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:47 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:47.442829    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:52 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:52.443541    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:07:56 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:07:56.586898    1309 scope.go:115] "RemoveContainer" containerID="ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5"
	Sep 21 22:07:57 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:07:57.444235    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:02 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:02.445881    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:07 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:07.446768    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:12 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:12.447604    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:17 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:17.448920    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:22 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:22.450323    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:27 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:27.451192    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:32 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:32.452811    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:37 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:37.454079    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:42 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:42.455234    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:47 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:47.456144    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:52 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:52.457098    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:08:57 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:08:57.458517    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:09:02 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:09:02.459266    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:09:07 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:09:07.460242    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:09:12 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:09:12.461540    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-qn9gp storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-qn9gp storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-qn9gp storage-provisioner: exit status 1 (56.259503ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-qn9gp" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-qn9gp storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (277.81s)
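
For context, the post-mortem above is just plain kubectl invocations against the profile's context: list every pod whose phase is not Running, then describe each one. Note that helpers_test.go:275 runs "describe pod" without a namespace flag, so the NotFound errors above are expected whenever the non-running pods live outside the default namespace (here they are in kube-system). A minimal Go sketch that reproduces the check outside the harness follows; the context name is copied from the log above, and this is a sketch, not the harness's own helper code.

// postmortem.go: minimal sketch of the non-running-pod post-mortem above
// (not the actual helpers_test.go code). Assumes kubectl is on PATH and
// the profile's kubeconfig context still exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "embed-certs-20220921220439-10174" // profile context from the log above

	// Step 1: list pods in any namespace whose phase is not Running.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		fmt.Printf("get po failed: %v\n%s\n", err, out)
		return
	}

	// Step 2: describe each non-running pod. Like the harness, this omits
	// a namespace flag, so pods outside "default" come back NotFound.
	for _, pod := range strings.Fields(string(out)) {
		desc, derr := exec.Command("kubectl", "--context", ctx,
			"describe", "pod", pod).CombinedOutput()
		fmt.Printf("=== %s (err: %v) ===\n%s\n", pod, derr, desc)
	}
}
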

TestNetworkPlugins/group/enable-default-cni/DNS (364.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126444898s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125731137s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:05:48.931616   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126508196s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:06:02.147777   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:02.153037   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:02.163278   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:02.183535   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:02.223824   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:02.304141   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:02.464582   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:02.784900   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:03.425796   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:06:03.573077   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:06:04.706861   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150939402s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:06:22.629634   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13415254s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0921 22:06:38.504662   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:06:43.110174   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136327869s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146383499s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12800013s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:08:11.494464   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130118354s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:08:45.991550   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131987172s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:09:20.481846   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129740633s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0921 22:09:41.650930   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
E0921 22:11:02.147214   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127921225s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (364.78s)
E0921 22:19:20.481713   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 22:19:21.247166   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:21.252438   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:21.262759   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:21.283034   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:21.323341   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:21.404268   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:21.564644   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:21.884830   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:22.525595   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:23.806506   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:26.366694   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:27.009557   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:19:31.487061   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:41.651170   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:19:41.727340   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:19:58.905689   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:20:02.208115   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:20:08.448429   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 22:20:26.589475   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:20:43.168362   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:20:50.053879   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:21:02.146820   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:21:04.694713   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
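
For context, the probe that net_test.go:169 keeps retrying above is a single exec of nslookup inside the netcat deployment, and the test passes once the answer contains the kubernetes service ClusterIP (10.96.0.1, per the want pattern at net_test.go:180). A minimal Go sketch of that loop follows; the context name is copied from the log above, and the 15-second cadence and 6-minute budget are assumptions mirroring the timings seen here, not the test's exact retry policy.

// dnsprobe.go: minimal sketch of the DNS check retried above (not the
// actual net_test.go code). Assumes kubectl is on PATH and the netcat
// deployment created by the test still exists in the cluster.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "enable-default-cni-20220921215523-10174" // profile context from the log above
	deadline := time.Now().Add(6 * time.Minute)      // assumed budget, not the test's exact timeout

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "exec",
			"deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
		// Success criterion from net_test.go:180: the answer must contain
		// the kubernetes service ClusterIP.
		if err == nil && strings.Contains(string(out), "10.96.0.1") {
			fmt.Printf("DNS resolved:\n%s\n", out)
			return
		}
		fmt.Printf("nslookup not ready (err: %v), retrying\n", err)
		time.Sleep(15 * time.Second) // roughly matches the cadence seen above
	}
	fmt.Println("DNS never resolved; check CoreDNS pods and the CNI bridge config")
}
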

TestStartStop/group/no-preload/serial/FirstStart (283.19s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220921220832-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220921220832-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: exit status 80 (4m41.320762087s)

-- stdout --
	* [no-preload-20220921220832-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node no-preload-20220921220832-10174 in cluster no-preload-20220921220832-10174
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0921 22:08:32.091715  242109 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:08:32.091884  242109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:08:32.091894  242109 out.go:309] Setting ErrFile to fd 2...
	I0921 22:08:32.091899  242109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:08:32.091992  242109 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:08:32.092577  242109 out.go:303] Setting JSON to false
	I0921 22:08:32.094002  242109 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3063,"bootTime":1663795049,"procs":521,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:08:32.094075  242109 start.go:125] virtualization: kvm guest
	I0921 22:08:32.096710  242109 out.go:177] * [no-preload-20220921220832-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:08:32.098227  242109 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:08:32.098237  242109 notify.go:214] Checking for updates...
	I0921 22:08:32.099707  242109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:08:32.101331  242109 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:08:32.103017  242109 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:08:32.104848  242109 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:08:32.106542  242109 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:08:32.106655  242109 config.go:180] Loaded profile config "enable-default-cni-20220921215523-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:08:32.106779  242109 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:08:32.106858  242109 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:08:32.139816  242109 docker.go:137] docker version: linux-20.10.18
	I0921 22:08:32.139902  242109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:08:32.232213  242109 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:08:32.16163627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:08:32.232321  242109 docker.go:254] overlay module found
	I0921 22:08:32.234175  242109 out.go:177] * Using the docker driver based on user configuration
	I0921 22:08:32.235555  242109 start.go:284] selected driver: docker
	I0921 22:08:32.235579  242109 start.go:808] validating driver "docker" against <nil>
	I0921 22:08:32.235600  242109 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:08:32.236602  242109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:08:32.331368  242109 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:08:32.257384866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:08:32.331533  242109 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:08:32.331680  242109 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:08:32.333663  242109 out.go:177] * Using Docker driver with root privileges
	I0921 22:08:32.335123  242109 cni.go:95] Creating CNI manager for ""
	I0921 22:08:32.335145  242109 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:08:32.335167  242109 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:08:32.335186  242109 start_flags.go:316] config:
	{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:08:32.336990  242109 out.go:177] * Starting control plane node no-preload-20220921220832-10174 in cluster no-preload-20220921220832-10174
	I0921 22:08:32.338434  242109 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:08:32.339908  242109 out.go:177] * Pulling base image ...
	I0921 22:08:32.341304  242109 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:08:32.341332  242109 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:08:32.341430  242109 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:08:32.341468  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json: {Name:mk8234a18099321a9a3e41526d762960614698ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:32.341583  242109 cache.go:107] acquiring lock: {Name:mk964a2e66a5444defeab854e6434a6f27bdb527 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341604  242109 cache.go:107] acquiring lock: {Name:mk0eb3fbf1ee9e76ad78bfdee22277edae17ed2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341628  242109 cache.go:107] acquiring lock: {Name:mk944562b9b2415f3d8e7ad36b373f92205bdb5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341694  242109 cache.go:107] acquiring lock: {Name:mka10a341c76ae214d12cf65b1bbb970ff641c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341730  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0921 22:08:32.341677  242109 cache.go:107] acquiring lock: {Name:mk6ae321142fb89935897137e30217f9ae2499ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341737  242109 cache.go:107] acquiring lock: {Name:mkb5c943b9da9e6c7ecc443b377ab990272f1b2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341746  242109 cache.go:107] acquiring lock: {Name:mk4fab6516978f221b8246a61f380f8ab97f066c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341680  242109 cache.go:107] acquiring lock: {Name:mkee4799116b59e3f65d0127cdad0c25a01a05e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.341783  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 exists
	I0921 22:08:32.341788  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 exists
	I0921 22:08:32.341791  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0921 22:08:32.341805  242109 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2" took 211.656µs
	I0921 22:08:32.341807  242109 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2" took 148.262µs
	I0921 22:08:32.341812  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 exists
	I0921 22:08:32.341816  242109 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 succeeded
	I0921 22:08:32.341821  242109 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 succeeded
	I0921 22:08:32.341808  242109 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 233.334µs
	I0921 22:08:32.341830  242109 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2" took 162.443µs
	I0921 22:08:32.341847  242109 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 succeeded
	I0921 22:08:32.341846  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0921 22:08:32.341838  242109 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0921 22:08:32.341824  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0921 22:08:32.341876  242109 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 204.098µs
	I0921 22:08:32.341874  242109 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 192.241µs
	I0921 22:08:32.341753  242109 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 136.592µs
	I0921 22:08:32.341891  242109 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0921 22:08:32.341885  242109 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0921 22:08:32.341891  242109 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0921 22:08:32.341823  242109 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 exists
	I0921 22:08:32.341914  242109 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2" took 286.498µs
	I0921 22:08:32.341930  242109 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 succeeded
	I0921 22:08:32.341939  242109 cache.go:87] Successfully saved all images to host disk.
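# A minimal shell sketch (not from the test run): with --preload=false the eight
# images above are cached as individual tarballs rather than one preload archive.
# The layout follows the cache paths logged above; $MINIKUBE_HOME stands in for
# the CI path and is an assumption about the local setup.
ls "$MINIKUBE_HOME/cache/images/amd64/registry.k8s.io/"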
	I0921 22:08:32.366473  242109 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:08:32.366496  242109 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:08:32.366514  242109 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:08:32.366548  242109 start.go:364] acquiring machines lock for no-preload-20220921220832-10174: {Name:mk189db360f5ac486cb35206c34214af6d1c65b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:08:32.366677  242109 start.go:368] acquired machines lock for "no-preload-20220921220832-10174" in 107.952µs
	I0921 22:08:32.366708  242109 start.go:93] Provisioning new machine with config: &{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:08:32.366803  242109 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:08:32.369209  242109 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:08:32.369459  242109 start.go:159] libmachine.API.Create for "no-preload-20220921220832-10174" (driver="docker")
	I0921 22:08:32.369482  242109 client.go:168] LocalClient.Create starting
	I0921 22:08:32.369604  242109 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:08:32.369644  242109 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:32.369665  242109 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:32.369721  242109 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:08:32.369753  242109 main.go:134] libmachine: Decoding PEM data...
	I0921 22:08:32.369769  242109 main.go:134] libmachine: Parsing certificate...
	I0921 22:08:32.370156  242109 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:08:32.394560  242109 cli_runner.go:211] docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:08:32.394644  242109 network_create.go:272] running [docker network inspect no-preload-20220921220832-10174] to gather additional debugging logs...
	I0921 22:08:32.394665  242109 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174
	W0921 22:08:32.417810  242109 cli_runner.go:211] docker network inspect no-preload-20220921220832-10174 returned with exit code 1
	I0921 22:08:32.417843  242109 network_create.go:275] error running [docker network inspect no-preload-20220921220832-10174]: docker network inspect no-preload-20220921220832-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-20220921220832-10174
	I0921 22:08:32.417860  242109 network_create.go:277] output of [docker network inspect no-preload-20220921220832-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-20220921220832-10174
	
	** /stderr **
	I0921 22:08:32.417923  242109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:08:32.443298  242109 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:08:32.444161  242109 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:08:32.444798  242109 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-e71aa30fd3ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7a:b1:c8:c1}}
	I0921 22:08:32.445526  242109 network.go:241] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-4f93bc2f061a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ca:b2:42:ce}}
	I0921 22:08:32.446196  242109 network.go:241] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName:br-4878e8461754 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:13:30:5c:d0}}
	I0921 22:08:32.447143  242109 network.go:290] reserving subnet 192.168.94.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.94.0:0xc0004de080] misses:0}
	I0921 22:08:32.447176  242109 network.go:236] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:08:32.447187  242109 network_create.go:115] attempt to create docker network no-preload-20220921220832-10174 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0921 22:08:32.447236  242109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 no-preload-20220921220832-10174
	I0921 22:08:32.508336  242109 network_create.go:99] docker network no-preload-20220921220832-10174 192.168.94.0/24 created
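# A minimal shell sketch (not from the test run) of the subnet scan and network
# creation logged above: minikube walks candidate /24s (192.168.49.0,
# 192.168.58.0, ...), skips any owned by an existing bridge, and creates the
# first free one. Flags and names below are verbatim from this run's log.
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
docker network create --driver=bridge \
  --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
  --label=created_by.minikube.sigs.k8s.io=true \
  --label=name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 \
  no-preload-20220921220832-10174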
	I0921 22:08:32.508374  242109 kic.go:106] calculated static IP "192.168.94.2" for the "no-preload-20220921220832-10174" container
	I0921 22:08:32.508432  242109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:08:32.534911  242109 cli_runner.go:164] Run: docker volume create no-preload-20220921220832-10174 --label name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:08:32.559222  242109 oci.go:103] Successfully created a docker volume no-preload-20220921220832-10174
	I0921 22:08:32.559322  242109 cli_runner.go:164] Run: docker run --rm --name no-preload-20220921220832-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --entrypoint /usr/bin/test -v no-preload-20220921220832-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:08:33.139204  242109 oci.go:107] Successfully prepared a docker volume no-preload-20220921220832-10174
	I0921 22:08:33.139255  242109 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	W0921 22:08:33.139369  242109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:08:33.139459  242109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:08:33.234984  242109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-20220921220832-10174 --name no-preload-20220921220832-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-20220921220832-10174 --network no-preload-20220921220832-10174 --ip 192.168.94.2 --volume no-preload-20220921220832-10174:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:08:33.616738  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Running}}
	I0921 22:08:33.645764  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:08:33.671906  242109 cli_runner.go:164] Run: docker exec no-preload-20220921220832-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:08:33.749023  242109 oci.go:144] the created container "no-preload-20220921220832-10174" has a running status.
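# A condensed sketch of the `docker run` above, trimmed to the flags that make
# containerd-in-docker work (the minikube labels, --expose, and three of the
# five published ports are omitted; all values are verbatim from this run):
docker run -d -t --privileged \
  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
  --network no-preload-20220921220832-10174 --ip 192.168.94.2 \
  --volume no-preload-20220921220832-10174:/var \
  --memory=2200mb --cpus=2 -e container=docker \
  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
  --hostname no-preload-20220921220832-10174 --name no-preload-20220921220832-10174 \
  gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c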
	I0921 22:08:33.749062  242109 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa...
	I0921 22:08:33.954020  242109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:08:34.034359  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:08:34.061636  242109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:08:34.061659  242109 kic_runner.go:114] Args: [docker exec --privileged no-preload-20220921220832-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:08:34.136878  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:08:34.163571  242109 machine.go:88] provisioning docker machine ...
	I0921 22:08:34.163605  242109 ubuntu.go:169] provisioning hostname "no-preload-20220921220832-10174"
	I0921 22:08:34.163657  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.189546  242109 main.go:134] libmachine: Using SSH client type: native
	I0921 22:08:34.189772  242109 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49408 <nil> <nil>}
	I0921 22:08:34.189795  242109 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220921220832-10174 && echo "no-preload-20220921220832-10174" | sudo tee /etc/hostname
	I0921 22:08:34.327956  242109 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220921220832-10174
	
	I0921 22:08:34.328041  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.354338  242109 main.go:134] libmachine: Using SSH client type: native
	I0921 22:08:34.354493  242109 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49408 <nil> <nil>}
	I0921 22:08:34.354515  242109 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220921220832-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220921220832-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220921220832-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:08:34.483530  242109 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:08:34.483570  242109 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:08:34.483615  242109 ubuntu.go:177] setting up certificates
	I0921 22:08:34.483625  242109 provision.go:83] configureAuth start
	I0921 22:08:34.483683  242109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:08:34.511261  242109 provision.go:138] copyHostCerts
	I0921 22:08:34.511329  242109 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:08:34.511341  242109 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:08:34.511416  242109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:08:34.511514  242109 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:08:34.511530  242109 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:08:34.511571  242109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:08:34.511683  242109 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:08:34.511702  242109 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:08:34.511774  242109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:08:34.511857  242109 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220921220832-10174 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220921220832-10174]
	I0921 22:08:34.690415  242109 provision.go:172] copyRemoteCerts
	I0921 22:08:34.690468  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:08:34.690855  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.716097  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:34.807600  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:08:34.826933  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0921 22:08:34.844386  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:08:34.861468  242109 provision.go:86] duration metric: configureAuth took 377.832384ms
	I0921 22:08:34.861491  242109 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:08:34.861655  242109 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:08:34.861668  242109 machine.go:91] provisioned docker machine in 698.07649ms
	I0921 22:08:34.861675  242109 client.go:171] LocalClient.Create took 2.49218544s
	I0921 22:08:34.861696  242109 start.go:167] duration metric: libmachine.API.Create for "no-preload-20220921220832-10174" took 2.492236327s
	I0921 22:08:34.861710  242109 start.go:300] post-start starting for "no-preload-20220921220832-10174" (driver="docker")
	I0921 22:08:34.861721  242109 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:08:34.861758  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:08:34.861812  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:34.886578  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:34.979345  242109 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:08:34.982130  242109 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:08:34.982152  242109 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:08:34.982162  242109 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:08:34.982168  242109 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:08:34.982186  242109 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:08:34.982233  242109 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:08:34.982302  242109 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:08:34.982377  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:08:34.988919  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:08:35.005937  242109 start.go:303] post-start completed in 144.212626ms
	I0921 22:08:35.006269  242109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:08:35.031597  242109 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:08:35.031860  242109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:08:35.031899  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:35.057271  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:35.144359  242109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:08:35.148566  242109 start.go:128] duration metric: createHost completed in 2.781750164s
	I0921 22:08:35.148594  242109 start.go:83] releasing machines lock for "no-preload-20220921220832-10174", held for 2.781899801s
	I0921 22:08:35.148673  242109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:08:35.174873  242109 ssh_runner.go:195] Run: systemctl --version
	I0921 22:08:35.174925  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:35.174956  242109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:08:35.175024  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:08:35.201765  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:35.203707  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:08:35.321934  242109 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:08:35.332356  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:08:35.342047  242109 docker.go:188] disabling docker service ...
	I0921 22:08:35.342105  242109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:08:35.360459  242109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:08:35.370066  242109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:08:35.448272  242109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:08:35.530327  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:08:35.539747  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:08:35.552260  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:08:35.560238  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:08:35.568221  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:08:35.575657  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
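# A sketch of the net effect of the four sed edits above on
# /etc/containerd/config.toml. Only the rewritten keys are known from the log;
# the surrounding TOML sections are not shown here:
#   sandbox_image = "registry.k8s.io/pause:3.8"
#   restrict_oom_score_adj = false
#   SystemdCgroup = false
#   conf_dir = "/etc/cni/net.mk"
grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml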
	I0921 22:08:35.582966  242109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:08:35.589047  242109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:08:35.595109  242109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:08:35.673595  242109 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:08:35.752751  242109 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:08:35.752814  242109 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:08:35.756381  242109 start.go:471] Will wait 60s for crictl version
	I0921 22:08:35.756424  242109 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:08:35.780466  242109 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:08:35.780526  242109 ssh_runner.go:195] Run: containerd --version
	I0921 22:08:35.811401  242109 ssh_runner.go:195] Run: containerd --version
	I0921 22:08:35.844655  242109 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:08:35.846055  242109 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:08:35.869527  242109 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0921 22:08:35.872805  242109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
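# A sketch (not from the log): the bash one-liner above pins the host gateway
# name inside the node, so after it runs /etc/hosts carries the line
# "192.168.94.1	host.minikube.internal" for this run's 192.168.94.0/24 network:
grep 'host.minikube.internal' /etc/hosts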
	I0921 22:08:35.882612  242109 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:08:35.882651  242109 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:08:35.904673  242109 containerd.go:549] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.25.2". assuming images are not preloaded.
	I0921 22:08:35.904696  242109 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.25.2 registry.k8s.io/kube-controller-manager:v1.25.2 registry.k8s.io/kube-scheduler:v1.25.2 registry.k8s.io/kube-proxy:v1.25.2 registry.k8s.io/pause:3.8 registry.k8s.io/etcd:3.5.4-0 registry.k8s.io/coredns/coredns:v1.9.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0921 22:08:35.904767  242109 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:35.904796  242109 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:35.904812  242109 image.go:134] retrieving image: registry.k8s.io/pause:3.8
	I0921 22:08:35.904824  242109 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:35.904815  242109 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:35.904796  242109 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:35.904779  242109 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:35.904773  242109 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:35.905925  242109 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:35.905934  242109 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.25.2: Error: No such image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:35.905924  242109 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.25.2: Error: No such image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:35.905951  242109 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.25.2: Error: No such image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:35.905920  242109 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.25.2: Error: No such image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:35.905934  242109 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.4-0: Error: No such image: registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:35.905935  242109 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:35.905937  242109 image.go:177] daemon lookup for registry.k8s.io/pause:3.8: Error: No such image: registry.k8s.io/pause:3.8
	I0921 22:08:36.396123  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8"
	I0921 22:08:36.418515  242109 cache_images.go:116] "registry.k8s.io/pause:3.8" needs transfer: "registry.k8s.io/pause:3.8" does not exist at hash "4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517" in container runtime
	I0921 22:08:36.418560  242109 cri.go:216] Removing image: registry.k8s.io/pause:3.8
	I0921 22:08:36.418594  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.421313  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.8
	I0921 22:08:36.435188  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.4-0"
	I0921 22:08:36.445557  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8
	I0921 22:08:36.445641  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.8
	I0921 22:08:36.448412  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.25.2"
	I0921 22:08:36.457433  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.25.2"
	I0921 22:08:36.457801  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.8: stat -c "%s %y" /var/lib/minikube/images/pause_3.8: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.8': No such file or directory
	I0921 22:08:36.457831  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 --> /var/lib/minikube/images/pause_3.8 (311296 bytes)
	I0921 22:08:36.457836  242109 cache_images.go:116] "registry.k8s.io/etcd:3.5.4-0" needs transfer: "registry.k8s.io/etcd:3.5.4-0" does not exist at hash "a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66" in container runtime
	I0921 22:08:36.457881  242109 cri.go:216] Removing image: registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:36.457922  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.460659  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.25.2"
	I0921 22:08:36.463542  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.9.3"
	I0921 22:08:36.479691  242109 containerd.go:233] Loading image: /var/lib/minikube/images/pause_3.8
	I0921 22:08:36.479829  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8
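# A sketch of the per-image load loop that starts here: each image is checked in
# containerd's k8s.io namespace, a stale copy is removed with crictl rmi, the
# tarball is copied from the host cache to /var/lib/minikube/images, and then
# imported. The pair below mirrors the commands logged above (pause:3.8 shown;
# the other seven images follow the same pattern):
sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.8 \
  || sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.8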
	I0921 22:08:36.485349  242109 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.25.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.25.2" does not exist at hash "97801f83949087fbdcc09b1c84ddda0ed5d01f4aabd17787a7714eb2796082b3" in container runtime
	I0921 22:08:36.485408  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.4-0
	I0921 22:08:36.485441  242109 cri.go:216] Removing image: registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:36.485482  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.485349  242109 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.25.2" needs transfer: "registry.k8s.io/kube-proxy:v1.25.2" does not exist at hash "1c7d8c51823b5eb08189d553d911097ec8a6a40fea40bb5bdea91842f30d2e86" in container runtime
	I0921 22:08:36.485551  242109 cri.go:216] Removing image: registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:36.485599  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.487598  242109 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.25.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.25.2" does not exist at hash "ca0ea1ee3cfd3d1ced15a8e6f4a236a436c5733b20a0b2dbbfbfd59977e12959" in container runtime
	I0921 22:08:36.487632  242109 cri.go:216] Removing image: registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:36.487660  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.493591  242109 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.9.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.9.3" does not exist at hash "5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a" in container runtime
	I0921 22:08:36.493629  242109 cri.go:216] Removing image: registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:36.493657  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.506710  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.25.2"
	I0921 22:08:36.626371  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 from cache
	I0921 22:08:36.626472  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.25.2
	I0921 22:08:36.626536  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0
	I0921 22:08:36.626616  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.25.2
	I0921 22:08:36.626623  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0
	I0921 22:08:36.626652  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.25.2
	I0921 22:08:36.626691  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.9.3
	I0921 22:08:36.626719  242109 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.25.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.25.2" does not exist at hash "dbfceb93c69b6d85661fe46c3e50de9e927e4895ebba2892a1db116e69c81890" in container runtime
	I0921 22:08:36.626756  242109 cri.go:216] Removing image: registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:36.626784  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.677935  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2
	I0921 22:08:36.678001  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2
	I0921 22:08:36.678015  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.5.4-0': No such file or directory
	I0921 22:08:36.678033  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 --> /var/lib/minikube/images/etcd_3.5.4-0 (102160384 bytes)
	I0921 22:08:36.678081  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0921 22:08:36.678037  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 22:08:36.678082  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 22:08:36.678151  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3
	I0921 22:08:36.683332  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2
	I0921 22:08:36.683384  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.9.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.9.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_v1.9.3': No such file or directory
	I0921 22:08:36.683420  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 --> /var/lib/minikube/images/coredns_v1.9.3 (14839296 bytes)
	I0921 22:08:36.683447  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 22:08:36.683467  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.25.2': No such file or directory
	I0921 22:08:36.683488  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.25.2': No such file or directory
	I0921 22:08:36.683346  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.25.2
	I0921 22:08:36.683509  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 --> /var/lib/minikube/images/kube-proxy_v1.25.2 (20265472 bytes)
	I0921 22:08:36.683491  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 --> /var/lib/minikube/images/kube-apiserver_v1.25.2 (34238464 bytes)
	I0921 22:08:36.777292  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2
	I0921 22:08:36.777342  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.25.2': No such file or directory
	I0921 22:08:36.777377  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 --> /var/lib/minikube/images/kube-scheduler_v1.25.2 (15798784 bytes)
	I0921 22:08:36.777402  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 22:08:36.814198  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.25.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.25.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.25.2': No such file or directory
	I0921 22:08:36.814239  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 --> /var/lib/minikube/images/kube-controller-manager_v1.25.2 (31264256 bytes)
	I0921 22:08:36.816317  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0921 22:08:36.913665  242109 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0921 22:08:36.913720  242109 cri.go:216] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:36.913763  242109 ssh_runner.go:195] Run: which crictl
	I0921 22:08:36.976718  242109 containerd.go:233] Loading image: /var/lib/minikube/images/coredns_v1.9.3
	I0921 22:08:36.976800  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.9.3
	I0921 22:08:36.978381  242109 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:08:37.888410  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 from cache
	I0921 22:08:37.888449  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 22:08:37.888504  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.25.2
	I0921 22:08:37.888508  242109 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0921 22:08:37.888581  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0921 22:08:38.829468  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 from cache
	I0921 22:08:38.829502  242109 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0921 22:08:38.829516  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 22:08:38.829527  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0921 22:08:38.829554  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.25.2
	I0921 22:08:39.652337  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 from cache
	I0921 22:08:39.652378  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 22:08:39.652422  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.2
	I0921 22:08:41.039289  242109 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.25.2: (1.386836292s)
	I0921 22:08:41.039320  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 from cache
	I0921 22:08:41.039353  242109 containerd.go:233] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 22:08:41.039389  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.2
	I0921 22:08:42.303033  242109 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.25.2: (1.263610588s)
	I0921 22:08:42.303060  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 from cache
	I0921 22:08:42.303087  242109 containerd.go:233] Loading image: /var/lib/minikube/images/etcd_3.5.4-0
	I0921 22:08:42.303129  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0
	I0921 22:08:46.156350  242109 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.4-0: (3.853187961s)
	I0921 22:08:46.156380  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 from cache
	I0921 22:08:46.156407  242109 containerd.go:233] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0921 22:08:46.156452  242109 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0921 22:08:46.590489  242109 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0921 22:08:46.590543  242109 cache_images.go:123] Successfully loaded all cached images
	I0921 22:08:46.590551  242109 cache_images.go:92] LoadImages completed in 10.685842974s
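The block above is minikube's cache-miss path for a --container-runtime=containerd node: each image is probed with "ctr -n=k8s.io images check", the cached tarball is copied in over SSH when both the probe and the stat existence check fail, and the tarball is then imported with "ctr images import". A minimal sketch of one such cycle, run by hand inside the node (the image name and tarball path are taken from the log; the wrapping script is illustrative):

	IMG=registry.k8s.io/pause:3.8
	TAR=/var/lib/minikube/images/pause_3.8
	# probe containerd's k8s.io namespace the same way the log lines do
	if ! sudo ctr -n=k8s.io images check | grep -q "$IMG"; then
	    sudo ctr -n=k8s.io images import "$TAR"   # load the cached image tarball
	fi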
	I0921 22:08:46.590602  242109 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:08:46.615686  242109 cni.go:95] Creating CNI manager for ""
	I0921 22:08:46.615709  242109 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:08:46.615755  242109 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:08:46.615772  242109 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220921220832-10174 NodeName:no-preload-20220921220832-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:08:46.615927  242109 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220921220832-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
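The generated file above is four stacked kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written later in the run to /var/tmp/minikube/kubeadm.yaml.new and then copied over kubeadm.yaml. As a sketch, a config like this can be exercised without changing node state via kubeadm's dry-run mode (same path as in the log; dry-run still runs preflight, so it assumes a reachable container runtime):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run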
	I0921 22:08:46.616044  242109 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220921220832-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
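The drop-in above (scp'd just below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) relies on a systemd convention: the first, empty ExecStart= clears the command inherited from the base kubelet.service before the second line substitutes the version-pinned binary and flags. A sketch of confirming the override on the node, using standard systemctl subcommands (nothing minikube-specific):

	sudo systemctl daemon-reload   # re-read unit files and drop-ins
	systemctl cat kubelet          # prints the base unit followed by 10-kubeadm.conf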
	I0921 22:08:46.616099  242109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:08:46.623273  242109 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.25.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.25.2': No such file or directory
	
	Initiating transfer...
	I0921 22:08:46.623321  242109 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.25.2
	I0921 22:08:46.630033  242109 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubectl.sha256
	I0921 22:08:46.630057  242109 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubelet.sha256
	I0921 22:08:46.630073  242109 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubeadm.sha256
	I0921 22:08:46.630100  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:08:46.630114  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.2/kubectl
	I0921 22:08:46.630154  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.2/kubeadm
	I0921 22:08:46.633651  242109 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.2/kubectl': No such file or directory
	I0921 22:08:46.633687  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/linux/amd64/v1.25.2/kubectl --> /var/lib/minikube/binaries/v1.25.2/kubectl (45015040 bytes)
	I0921 22:08:46.641795  242109 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.2/kubeadm': No such file or directory
	I0921 22:08:46.641822  242109 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.2/kubelet
	I0921 22:08:46.641822  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/linux/amd64/v1.25.2/kubeadm --> /var/lib/minikube/binaries/v1.25.2/kubeadm (43798528 bytes)
	I0921 22:08:46.657920  242109 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.25.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.25.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.25.2/kubelet': No such file or directory
	I0921 22:08:46.657958  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/linux/amd64/v1.25.2/kubelet --> /var/lib/minikube/binaries/v1.25.2/kubelet (114229208 bytes)
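The "Not caching binary" lines above show the release-bucket URL with a checksum= fragment, which tells minikube's downloader to verify each binary against the published .sha256 file. The same verification can be reproduced by hand; a sketch assuming the URLs from the log and that the .sha256 file contains only the hex digest:

	VER=v1.25.2
	BASE=https://storage.googleapis.com/kubernetes-release/release/$VER/bin/linux/amd64
	curl -fsSLO "$BASE/kubelet"
	echo "$(curl -fsSL "$BASE/kubelet.sha256")  kubelet" | sha256sum --check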
	I0921 22:08:47.042335  242109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:08:47.049185  242109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0921 22:08:47.064259  242109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:08:47.077743  242109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0921 22:08:47.091446  242109 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:08:47.094777  242109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:08:47.104435  242109 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174 for IP: 192.168.94.2
	I0921 22:08:47.104536  242109 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:08:47.104571  242109 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:08:47.104617  242109 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key
	I0921 22:08:47.104631  242109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.crt with IP's: []
	I0921 22:08:47.322756  242109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.crt ...
	I0921 22:08:47.322786  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.crt: {Name:mk85591f6c78ee9c1b821877f8a8e1ba8c002ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.322985  242109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key ...
	I0921 22:08:47.323000  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key: {Name:mkd74bb0553ae0b96fa9591e0ef94fcbd07d1fca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.323087  242109 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a
	I0921 22:08:47.323102  242109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:08:47.483755  242109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a ...
	I0921 22:08:47.483790  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a: {Name:mk48e4b74038505c40285e03d6ebaeb0f1a7facc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.484008  242109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a ...
	I0921 22:08:47.484027  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a: {Name:mk3e8ff442c58e1eb897e504d0c2602cf9404be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.484121  242109 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt
	I0921 22:08:47.484181  242109 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key
	I0921 22:08:47.484233  242109 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key
	I0921 22:08:47.484249  242109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt with IP's: []
	I0921 22:08:47.723751  242109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt ...
	I0921 22:08:47.723784  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt: {Name:mk03b5ee8cea1d4f283d674c427e7d33342a4be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.723972  242109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key ...
	I0921 22:08:47.723984  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key: {Name:mkf65de18c4cb3a81c8c54e3c1c9e9fc7b6259b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:08:47.724155  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:08:47.724197  242109 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:08:47.724217  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:08:47.724246  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:08:47.724271  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:08:47.724296  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:08:47.724334  242109 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:08:47.724847  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:08:47.743210  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:08:47.760616  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:08:47.777454  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:08:47.795008  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:08:47.813029  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:08:47.830326  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:08:47.846904  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:08:47.863979  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:08:47.881065  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:08:47.897965  242109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:08:47.914592  242109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:08:47.927516  242109 ssh_runner.go:195] Run: openssl version
	I0921 22:08:47.932439  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:08:47.939571  242109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:08:47.942739  242109 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:08:47.942793  242109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:08:47.947595  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:08:47.954844  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:08:47.962246  242109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:08:47.965549  242109 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:08:47.965589  242109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:08:47.970478  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:08:47.977766  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:08:47.985200  242109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:08:47.988442  242109 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:08:47.988488  242109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:08:47.993172  242109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
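The openssl x509 -hash -noout calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs; each cert is then exposed under <hash>.0, where the .0 suffix distinguishes colliding hashes. Condensed into a sketch with the first cert from the log:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"   # b5213941.0 in the run above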
	I0921 22:08:48.000573  242109 kubeadm.go:396] StartCluster: {Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:08:48.000677  242109 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:08:48.000726  242109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:08:48.025134  242109 cri.go:87] found id: ""
	I0921 22:08:48.025190  242109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:08:48.032261  242109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:08:48.039171  242109 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:08:48.039231  242109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:08:48.046214  242109 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:08:48.046299  242109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:08:48.088140  242109 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:08:48.088209  242109 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:08:48.117792  242109 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:08:48.117878  242109 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:08:48.117923  242109 kubeadm.go:317] OS: Linux
	I0921 22:08:48.117984  242109 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:08:48.118081  242109 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:08:48.118147  242109 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:08:48.118219  242109 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:08:48.118316  242109 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:08:48.118385  242109 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:08:48.118447  242109 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:08:48.118555  242109 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:08:48.118644  242109 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:08:48.180626  242109 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:08:48.180773  242109 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:08:48.180889  242109 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:08:48.297089  242109 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:08:48.299973  242109 out.go:204]   - Generating certificates and keys ...
	I0921 22:08:48.300084  242109 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:08:48.300159  242109 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:08:48.345407  242109 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:08:48.412890  242109 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:08:48.459640  242109 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:08:48.537047  242109 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:08:48.688929  242109 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:08:48.689106  242109 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-20220921220832-10174] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0921 22:08:48.857202  242109 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:08:48.857367  242109 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-20220921220832-10174] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0921 22:08:49.098125  242109 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:08:49.259620  242109 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:08:49.346098  242109 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:08:49.346223  242109 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:08:49.494334  242109 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:08:49.729704  242109 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:08:49.888182  242109 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:08:50.100841  242109 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:08:50.112065  242109 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:08:50.112909  242109 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:08:50.112969  242109 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:08:50.192663  242109 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:08:50.195940  242109 out.go:204]   - Booting up control plane ...
	I0921 22:08:50.196092  242109 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:08:50.197016  242109 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:08:50.197843  242109 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:08:50.198576  242109 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:08:50.200535  242109 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:08:56.703357  242109 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502776 seconds
	I0921 22:08:56.703469  242109 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:08:56.711328  242109 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:08:57.227288  242109 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:08:57.227573  242109 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:08:57.736874  242109 kubeadm.go:317] [bootstrap-token] Using token: uutotp.tqwybgup8rypvhi1
	I0921 22:08:57.738394  242109 out.go:204]   - Configuring RBAC rules ...
	I0921 22:08:57.738514  242109 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:08:57.741343  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:08:57.745866  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:08:57.747957  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:08:57.749935  242109 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:08:57.751680  242109 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:08:57.758707  242109 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:08:57.969367  242109 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:08:58.180290  242109 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:08:58.181366  242109 kubeadm.go:317] 
	I0921 22:08:58.181465  242109 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:08:58.181486  242109 kubeadm.go:317] 
	I0921 22:08:58.181560  242109 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:08:58.181569  242109 kubeadm.go:317] 
	I0921 22:08:58.181589  242109 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:08:58.181650  242109 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:08:58.181740  242109 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:08:58.181760  242109 kubeadm.go:317] 
	I0921 22:08:58.181832  242109 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:08:58.181845  242109 kubeadm.go:317] 
	I0921 22:08:58.181920  242109 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:08:58.181934  242109 kubeadm.go:317] 
	I0921 22:08:58.181980  242109 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:08:58.182064  242109 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:08:58.182156  242109 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:08:58.182168  242109 kubeadm.go:317] 
	I0921 22:08:58.182279  242109 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:08:58.182379  242109 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:08:58.182392  242109 kubeadm.go:317] 
	I0921 22:08:58.182481  242109 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token uutotp.tqwybgup8rypvhi1 \
	I0921 22:08:58.182570  242109 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:08:58.182608  242109 kubeadm.go:317] 	--control-plane 
	I0921 22:08:58.182621  242109 kubeadm.go:317] 
	I0921 22:08:58.182729  242109 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:08:58.182743  242109 kubeadm.go:317] 
	I0921 22:08:58.182860  242109 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token uutotp.tqwybgup8rypvhi1 \
	I0921 22:08:58.182966  242109 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:08:58.184610  242109 kubeadm.go:317] W0921 22:08:48.080443    1165 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:08:58.185021  242109 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:08:58.185195  242109 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:08:58.185219  242109 cni.go:95] Creating CNI manager for ""
	I0921 22:08:58.185230  242109 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:08:58.187251  242109 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:08:58.188674  242109 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:08:58.193540  242109 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:08:58.193563  242109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:08:58.207619  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:08:58.977362  242109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:08:58.977533  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:08:58.977535  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_08_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:08:58.984616  242109 ops.go:34] apiserver oom_adj: -16
	I0921 22:08:59.085839  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:08:59.647822  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:00.147844  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:00.647817  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:01.148095  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:01.647490  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:02.147631  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:02.647704  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:03.147462  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:03.647170  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:04.148196  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:04.647896  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:05.147797  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:05.647835  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:06.147616  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:06.648057  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:07.148138  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:07.648259  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:08.147324  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:08.647825  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:09.147235  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:09.647560  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:10.148226  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:10.647618  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:11.148232  242109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:09:11.443591  242109 kubeadm.go:1067] duration metric: took 12.466122573s to wait for elevateKubeSystemPrivileges.
	I0921 22:09:11.443629  242109 kubeadm.go:398] StartCluster complete in 23.443059645s
	I0921 22:09:11.443651  242109 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:09:11.443796  242109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:09:11.445698  242109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0921 22:09:12.105891  242109 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0921 22:09:13.224528  242109 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
	I0921 22:09:13.224598  242109 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:09:13.226122  242109 out.go:177] * Verifying Kubernetes components...
	I0921 22:09:13.224654  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:09:13.224663  242109 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:09:13.224812  242109 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:09:13.227415  242109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:09:13.227468  242109 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:09:13.227491  242109 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	W0921 22:09:13.227496  242109 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:09:13.227470  242109 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:09:13.227573  242109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:09:13.227538  242109 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:09:13.228056  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:09:13.228219  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:09:13.263194  242109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:09:13.276187  242109 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:09:13.277014  242109 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:09:13.277169  242109 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:09:13.277190  242109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:09:13.277252  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:09:13.277322  242109 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:09:13.277887  242109 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:09:13.312648  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:09:13.317631  242109 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:09:13.317656  242109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:09:13.317711  242109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:09:13.328590  242109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:09:13.330064  242109 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:09:13.347171  242109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49408 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:09:13.489796  242109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:09:13.493096  242109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:09:13.790286  242109 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:09:13.930600  242109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0921 22:09:13.931786  242109 addons.go:414] enableAddons completed in 707.12505ms
	I0921 22:09:15.336819  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:17.337047  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:19.836911  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:22.336956  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:24.837159  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:27.337047  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:29.337656  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:31.836215  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:34.337094  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:36.837016  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:39.337000  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:41.836710  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:43.837182  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:46.336639  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:48.836538  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:50.836773  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:52.836834  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:55.337198  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:09:57.837037  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:00.336264  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:02.836640  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:04.836962  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:07.337093  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:09.836993  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:12.336973  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:14.836797  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:17.336986  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:19.337232  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:21.836822  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:24.336556  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:26.336818  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:28.835935  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:30.837027  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:33.337081  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:35.836074  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:37.836900  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:40.336769  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:42.337141  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:44.836272  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:46.836519  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:48.836951  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:51.337176  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:53.836332  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:55.836776  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:10:58.336765  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:00.336875  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:02.337279  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:04.836749  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:06.837109  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:09.336292  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:11.836676  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:13.837294  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:16.336908  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:18.837224  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:21.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:23.836773  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:25.837620  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:28.336967  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:30.337024  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:32.337947  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:34.836321  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:36.837370  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:39.337094  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:41.836184  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:43.836287  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:45.837347  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:48.335973  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:50.336273  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:52.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:54.836927  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:57.336103  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:59.336841  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:01.836213  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:03.836938  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:06.336349  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:08.837257  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:11.336377  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:13.837027  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:16.336212  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:18.837013  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:21.336931  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:23.836124  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:25.836792  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:28.337008  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:30.836665  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:32.836819  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:35.336220  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:37.336820  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:39.837041  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:42.336354  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:44.836264  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:46.836827  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:49.336608  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:51.336789  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:53.836292  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:55.836687  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:57.836753  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:59.836812  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:02.336234  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:04.337102  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:06.836799  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:08.836960  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:11.336661  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:13.338612  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:13.338639  242109 node_ready.go:38] duration metric: took 4m0.008551222s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:13:13.340854  242109 out.go:177] 
	W0921 22:13:13.342210  242109 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:13:13.342226  242109 out.go:239] * 
	W0921 22:13:13.342954  242109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:13:13.344170  242109 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-20220921220832-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2": exit status 80
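To triage this outside the harness, the failing invocation can be replayed by hand. A minimal sketch, assuming the same tree layout as this CI run (the binary path and profile name below are copied from the log and will differ elsewhere):

	out/minikube-linux-amd64 start -p no-preload-20220921220832-10174 \
	  --memory=2200 --alsologtostderr --wait=true --preload=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.25.2
	# If the node again never reports Ready, capture full logs for an issue
	# report, as the failure box above suggests:
	out/minikube-linux-amd64 -p no-preload-20220921220832-10174 logs --file=logs.txt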
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220832-10174
helpers_test.go:235: (dbg) docker inspect no-preload-20220921220832-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e",
	        "Created": "2022-09-21T22:08:33.259074855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 242679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:08:33.608689229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hosts",
	        "LogPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e-json.log",
	        "Name": "/no-preload-20220921220832-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220921220832-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220921220832-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220921220832-10174",
	                "Source": "/var/lib/docker/volumes/no-preload-20220921220832-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220921220832-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "name.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f3429c3eccb420d534b5769179f5361b8b68686659e922bbb6d167cf1b0160",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/29f3429c3ecc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220921220832-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d6359e799a3f",
	                        "no-preload-20220921220832-10174"
	                    ],
	                    "NetworkID": "40cb175bb75cdb2ff8ee942229fbc7e22e0ed7651da5bae77cd3dd1e2f70c5e3",
	                    "EndpointID": "3a727e68b6a78ddeed89a7d40cdef360d206e4656d04dab25ad21e8976c86ff4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
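The inspect output above shows the container Running with all five service ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1, which suggests the failure is inside the guest rather than at the Docker layer. The harness extracts the SSH host port with the same --format query that appears in the log; a sketch for manual checks, using the profile name and port values from this run:

	docker container inspect no-preload-20220921220832-10174 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# -> 49408 in this run; 8443/tcp (the apiserver) is mapped to 127.0.0.1:49405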
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220921220832-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                                     |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p auto-20220921215523-10174                      | auto-20220921215523-10174                       | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p auto-20220921215523-10174                      | auto-20220921215523-10174                       | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	| start   | -p calico-20220921215524-10174                    | calico-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC |                     |
	|         | --memory=2048                                     |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --cni=calico --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | kindnet-20220921215523-10174                    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174                      |                                                 |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kindnet-20220921215523-10174                    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174                      |                                                 |         |         |                     |                     |
	| start   | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	| start   | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                                     |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kubernetes-upgrade-20220921215522-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | kubernetes-upgrade-20220921215522-10174           |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC |                     |
	|         | embed-certs-20220921220439-10174                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:07 UTC |
	| start   | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                 |         |         |                     |                     |
	| delete  | -p calico-20220921215524-10174                    | calico-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	| delete  | -p                                                | disable-driver-mounts-20220921220831-10174      | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	|         | disable-driver-mounts-20220921220831-10174        |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC |                     |
	|         | no-preload-20220921220832-10174                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC |                     |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                 |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:11:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:11:18.087901  251080 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:11:18.088024  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088036  251080 out.go:309] Setting ErrFile to fd 2...
	I0921 22:11:18.088042  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088174  251080 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:11:18.088746  251080 out.go:303] Setting JSON to false
	I0921 22:11:18.090393  251080 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3229,"bootTime":1663795049,"procs":653,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:11:18.090456  251080 start.go:125] virtualization: kvm guest
	I0921 22:11:18.093408  251080 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:11:18.094844  251080 notify.go:214] Checking for updates...
	I0921 22:11:18.096337  251080 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:11:18.097775  251080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:11:18.099219  251080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:18.100740  251080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:11:18.102389  251080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:11:18.104495  251080 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104651  251080 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104807  251080 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:11:18.104881  251080 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:11:18.138312  251080 docker.go:137] docker version: linux-20.10.18
	I0921 22:11:18.138426  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.232188  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.15986917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.232324  251080 docker.go:254] overlay module found
	I0921 22:11:18.234351  251080 out.go:177] * Using the docker driver based on user configuration
	I0921 22:11:18.235767  251080 start.go:284] selected driver: docker
	I0921 22:11:18.235790  251080 start.go:808] validating driver "docker" against <nil>
	I0921 22:11:18.235809  251080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:11:18.236643  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.330559  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.257769036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.330687  251080 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:11:18.330876  251080 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:11:18.332978  251080 out.go:177] * Using Docker driver with root privileges
	I0921 22:11:18.334347  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:18.334364  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:18.334381  251080 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:11:18.334405  251080 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:18.336049  251080 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.337335  251080 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:11:18.338625  251080 out.go:177] * Pulling base image ...
	I0921 22:11:18.339915  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:18.339961  251080 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:11:18.339976  251080 cache.go:57] Caching tarball of preloaded images
	I0921 22:11:18.340010  251080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:11:18.340234  251080 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:11:18.340259  251080 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:11:18.340397  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:18.340430  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json: {Name:mk68817f4bf887721f92775083cbcee80d5fb68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:18.367818  251080 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:11:18.367843  251080 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:11:18.367856  251080 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:11:18.367892  251080 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:18.368018  251080 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 101.344µs
	I0921 22:11:18.368055  251080 start.go:93] Provisioning new machine with config: &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:18.368157  251080 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:17.425811  247121 kubeadm.go:778] kubelet initialised
	I0921 22:11:17.425835  247121 kubeadm.go:779] duration metric: took 58.431682599s waiting for restarted kubelet to initialise ...
	I0921 22:11:17.425842  247121 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:11:17.430135  247121 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.434236  247121 pod_ready.go:92] pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.434255  247121 pod_ready.go:81] duration metric: took 4.0995ms waiting for pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.434264  247121 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.438049  247121 pod_ready.go:92] pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.438070  247121 pod_ready.go:81] duration metric: took 3.799088ms waiting for pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.438084  247121 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.441889  247121 pod_ready.go:92] pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.441906  247121 pod_ready.go:81] duration metric: took 3.813836ms waiting for pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.441918  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.445604  247121 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.445626  247121 pod_ready.go:81] duration metric: took 3.699251ms waiting for pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.445637  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.825354  247121 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.825379  247121 pod_ready.go:81] duration metric: took 379.733387ms waiting for pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.825389  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxg44" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.225967  247121 pod_ready.go:92] pod "kube-proxy-fxg44" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:18.225996  247121 pod_ready.go:81] duration metric: took 400.60033ms waiting for pod "kube-proxy-fxg44" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.226010  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.625047  247121 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:18.625076  247121 pod_ready.go:81] duration metric: took 399.057463ms waiting for pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.625094  247121 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.837224  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:21.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:18.370528  251080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:18.370720  251080 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:18.370744  251080 client.go:168] LocalClient.Create starting
	I0921 22:11:18.370817  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:11:18.370845  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370861  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.370925  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:11:18.370944  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370953  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.371236  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:18.395515  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:18.395579  251080 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221118-10174] to gather additional debugging logs...
	I0921 22:11:18.395600  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174
	W0921 22:11:18.419547  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 returned with exit code 1
	I0921 22:11:18.419579  251080 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221118-10174]: docker network inspect default-k8s-different-port-20220921221118-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.419591  251080 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221118-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221118-10174
	
	** /stderr **
	I0921 22:11:18.419643  251080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:18.444258  251080 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:11:18.445274  251080 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:11:18.446196  251080 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-e71aa30fd3ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7a:b1:c8:c1}}
	I0921 22:11:18.447244  251080 network.go:241] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-4f93bc2f061a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ca:b2:42:ce}}
	I0921 22:11:18.448755  251080 network.go:290] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc00012cb10] misses:0}
	I0921 22:11:18.448802  251080 network.go:236] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:11:18.448826  251080 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0921 22:11:18.448915  251080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.510820  251080 network_create.go:99] docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 created
	I0921 22:11:18.510857  251080 kic.go:106] calculated static IP "192.168.85.2" for the "default-k8s-different-port-20220921221118-10174" container
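	The network.go lines above show the free-subnet walk: candidate 192.168.x.0/24 blocks advance in steps of 9 through the third octet (49, 58, 67, 76) until one is not held by an existing bridge, here 192.168.85.0/24, after which the container gets the .2 address. A minimal Go sketch of that walk, using a hypothetical firstFreeSubnet helper rather than minikube's actual API:

	package main

	import "fmt"

	// firstFreeSubnet steps through candidate /24s the way the log above does
	// (third octet 49, 58, 67, ... in steps of 9) and returns the first one
	// absent from the set of subnets already taken by docker bridges.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return "" // no free candidate found
	}

	func main() {
		// The four subnets skipped in the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
	}

	The winning subnet then feeds the docker network create invocation and the static-IP calculation shown above.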
	I0921 22:11:18.510919  251080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:11:18.536329  251080 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221118-10174 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:11:18.561443  251080 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.561538  251080 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220921221118-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --entrypoint /usr/bin/test -v default-k8s-different-port-20220921221118-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:11:19.127923  251080 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:19.127974  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:19.127994  251080 kic.go:179] Starting extracting preloaded images to volume ...
	I0921 22:11:19.128049  251080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir
	I0921 22:11:21.030814  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:23.030888  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:23.836773  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:25.837620  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:25.638147  251080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir: (6.510027893s)
	I0921 22:11:25.638182  251080 kic.go:188] duration metric: took 6.510186 seconds to extract preloaded images to volume
	W0921 22:11:25.638326  251080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:11:25.638433  251080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:11:25.732843  251080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220921221118-10174 --name default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --network default-k8s-different-port-20220921221118-10174 --ip 192.168.85.2 --volume default-k8s-different-port-20220921221118-10174:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:11:26.149451  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Running}}
	I0921 22:11:26.176098  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.201313  251080 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220921221118-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:11:26.261131  251080 oci.go:144] the created container "default-k8s-different-port-20220921221118-10174" has a running status.
	I0921 22:11:26.261169  251080 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa...
	I0921 22:11:26.437655  251080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:11:26.519667  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.549062  251080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:11:26.549102  251080 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220921221118-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:11:26.638792  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.669847  251080 machine.go:88] provisioning docker machine ...
	I0921 22:11:26.669895  251080 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:26.669965  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.697039  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.697198  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.697217  251080 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:11:26.837603  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:11:26.837685  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.861819  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.861990  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.862027  251080 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:11:26.991431  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:11:26.991457  251080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:11:26.991475  251080 ubuntu.go:177] setting up certificates
	I0921 22:11:26.991485  251080 provision.go:83] configureAuth start
	I0921 22:11:26.991540  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.016270  251080 provision.go:138] copyHostCerts
	I0921 22:11:27.016322  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:11:27.016333  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:11:27.016404  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:11:27.016484  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:11:27.016495  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:11:27.016521  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:11:27.016571  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:11:27.016579  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:11:27.016602  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:11:27.016655  251080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:11:27.144451  251080 provision.go:172] copyRemoteCerts
	I0921 22:11:27.144512  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:11:27.144545  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.170137  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.266755  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:11:27.283950  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:11:27.300984  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:11:27.317480  251080 provision.go:86] duration metric: configureAuth took 325.986117ms
	I0921 22:11:27.317504  251080 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:11:27.317672  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:27.317689  251080 machine.go:91] provisioned docker machine in 647.81218ms
	I0921 22:11:27.317695  251080 client.go:171] LocalClient.Create took 8.9469458s
	I0921 22:11:27.317730  251080 start.go:167] duration metric: libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" took 8.947008533s
	I0921 22:11:27.317744  251080 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:27.317749  251080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:11:27.317788  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:11:27.317835  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.343342  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.435531  251080 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:11:27.438295  251080 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:11:27.438325  251080 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:11:27.438342  251080 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:11:27.438356  251080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:11:27.438371  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:11:27.438424  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:11:27.438521  251080 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:11:27.438630  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:11:27.445223  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:27.462414  251080 start.go:303] post-start completed in 144.661014ms
	I0921 22:11:27.462741  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.489387  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:27.489723  251080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:27.489786  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.514068  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.604197  251080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:27.608399  251080 start.go:128] duration metric: createHost completed in 9.240229808s
	I0921 22:11:27.608420  251080 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 9.240389159s
	I0921 22:11:27.608527  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634524  251080 ssh_runner.go:195] Run: systemctl --version
	I0921 22:11:27.634570  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634600  251080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:11:27.634691  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.660182  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.660873  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.749037  251080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:11:27.781889  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:11:27.791675  251080 docker.go:188] disabling docker service ...
	I0921 22:11:27.791773  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:11:27.809646  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:11:27.818739  251080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:11:27.897618  251080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:11:27.972484  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:11:27.982099  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:11:27.995156  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:11:28.003109  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.011124  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.018761  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:11:28.026807  251080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:11:28.034371  251080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:11:28.041097  251080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:11:28.122123  251080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:11:28.202854  251080 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:11:28.202928  251080 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:11:28.206617  251080 start.go:471] Will wait 60s for crictl version
	I0921 22:11:28.206695  251080 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:11:28.234745  251080 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:11:28.234815  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.263806  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.295305  251080 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:11:28.296662  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:28.320125  251080 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:11:28.323370  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:11:28.333100  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:28.333171  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.357788  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.357819  251080 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:11:28.357874  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.381874  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.381894  251080 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:11:28.381937  251080 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:11:28.408427  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:28.408456  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:28.408470  251080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:11:28.408481  251080 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:11:28.408605  251080 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
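	The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:156. A minimal text/template sketch of that rendering, with an illustrative kubeadmOpts struct that is not minikube's real type, covering only the InitConfiguration fragment:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmOpts is a stand-in for the options struct in the log above.
	type kubeadmOpts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		CRISocket        string
		NodeIP           string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("init").Parse(initCfg))
		// Values taken from the log above; error handling elided for brevity.
		_ = t.Execute(os.Stdout, kubeadmOpts{
			AdvertiseAddress: "192.168.85.2",
			APIServerPort:    8444,
			NodeName:         "default-k8s-different-port-20220921221118-10174",
			CRISocket:        "/run/containerd/containerd.sock",
			NodeIP:           "192.168.85.2",
		})
	}

	Keeping bindPort a template parameter is what lets this default-k8s-different-port profile substitute 8444 for the usual 8443 via --apiserver-port.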
	
	I0921 22:11:28.408684  251080 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
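The [Unit]/[Service] fragment above is the kubelet drop-in that the next few lines scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A rewritten unit only takes effect once systemd rereads it, so the usual follow-up on the node is:

	sudo systemctl daemon-reload
	sudo systemctl restart kubelet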
	I0921 22:11:28.408742  251080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:11:28.416363  251080 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:11:28.416431  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:11:28.423279  251080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:11:28.435844  251080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:11:28.448554  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:11:28.461624  251080 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:11:28.464712  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:11:28.474003  251080 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:11:28.474126  251080 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:11:28.474185  251080 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:11:28.474246  251080 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:11:28.474266  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt with IP's: []
	I0921 22:11:28.567465  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt ...
	I0921 22:11:28.567491  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt: {Name:mk7f007abc18238b3f4d498b44323ac1c9a08dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567699  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key ...
	I0921 22:11:28.567732  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key: {Name:mk573406c706742430a89f6f7a356628c72d9a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567860  251080 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:11:28.567875  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:11:28.821872  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c ...
	I0921 22:11:28.821903  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c: {Name:mk6f9bf09d9a1574fea352675c579bd5b29a8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822090  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c ...
	I0921 22:11:28.822105  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c: {Name:mk02ae9ee31bcf5d402f8edd4ad6acaa82a351d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822189  251080 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt
	I0921 22:11:28.822247  251080 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key
	I0921 22:11:28.822293  251080 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:11:28.822308  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt with IP's: []
	I0921 22:11:28.922715  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt ...
	I0921 22:11:28.922741  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt: {Name:mkaf5c21db58b4a0b90357c15da03dae1abe71c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.922924  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key ...
	I0921 22:11:28.922938  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key: {Name:mk91f1c41e1900ed0eb542cfae77ba7b1ff8febd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.923107  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:11:28.923145  251080 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:11:28.923157  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:11:28.923183  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:11:28.923210  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:11:28.923233  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:11:28.923271  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:28.923840  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:11:28.942334  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:11:28.959138  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:11:28.975925  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:11:28.992601  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:11:29.009145  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:11:29.025974  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:11:29.043889  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:11:29.061111  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:11:29.078117  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:11:29.095326  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:11:29.112457  251080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
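With the certificates now staged under /var/lib/minikube/certs, the SANs baked into the apiserver cert (the IP list from the generation step above) can be confirmed with openssl, e.g.:

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'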
	I0921 22:11:29.124660  251080 ssh_runner.go:195] Run: openssl version
	I0921 22:11:29.129304  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:11:29.136557  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139479  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139517  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.144088  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:11:29.151649  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:11:29.158634  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161640  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161682  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.166192  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:11:29.173529  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:11:29.181111  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184130  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184178  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.189023  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
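The hex names in these symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: TLS clients locate a CA under /etc/ssl/certs by hashing its subject and appending .0, so the pattern the runner executes above amounts to:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"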
	I0921 22:11:29.196116  251080 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:29.196192  251080 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:11:29.196252  251080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:11:29.220112  251080 cri.go:87] found id: ""
	I0921 22:11:29.220180  251080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:11:29.227068  251080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:11:29.234009  251080 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:11:29.234055  251080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:11:29.240811  251080 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:11:29.240844  251080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
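The long --ignore-preflight-errors list mutes checks that are expected to fail inside a docker-driver container (swap, ports, kernel config). The same preflight stage can also be exercised on its own, a sketch using the same config file:

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml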
	I0921 22:11:29.281554  251080 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:11:29.281632  251080 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:11:29.309304  251080 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:11:29.309370  251080 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:11:29.309403  251080 kubeadm.go:317] OS: Linux
	I0921 22:11:29.309445  251080 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:11:29.309491  251080 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:11:29.309562  251080 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:11:29.309615  251080 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:11:29.309671  251080 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:11:29.309719  251080 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:11:29.309757  251080 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:11:29.309798  251080 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:11:29.309837  251080 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:11:29.374829  251080 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:11:29.374943  251080 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:11:29.375043  251080 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:11:29.498766  251080 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:11:25.530784  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:28.030733  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:30.031206  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:29.501998  251080 out.go:204]   - Generating certificates and keys ...
	I0921 22:11:29.502140  251080 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:11:29.502277  251080 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:11:29.597971  251080 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:11:29.835986  251080 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:11:30.089547  251080 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:11:30.169634  251080 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:11:30.225195  251080 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:11:30.225404  251080 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.334625  251080 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:11:30.334942  251080 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.454648  251080 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:11:30.667751  251080 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:11:30.842577  251080 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:11:30.842710  251080 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:11:30.909448  251080 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:11:31.056256  251080 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:11:31.120718  251080 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:11:31.191075  251080 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:11:31.202857  251080 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:11:31.203759  251080 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:11:31.203851  251080 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:11:31.284919  251080 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:11:28.336967  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:30.337024  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:31.287269  251080 out.go:204]   - Booting up control plane ...
	I0921 22:11:31.287395  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:11:31.288963  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:11:31.289889  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:11:31.290600  251080 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:11:31.292356  251080 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:11:32.530623  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:35.030218  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:32.337947  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:34.836321  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:36.837370  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:37.294544  251080 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002117 seconds
	I0921 22:11:37.294700  251080 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:11:37.302999  251080 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:11:37.820634  251080 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:11:37.820909  251080 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:11:38.328855  251080 kubeadm.go:317] [bootstrap-token] Using token: f60jp5.opo6lrzt47sur902
	I0921 22:11:38.330272  251080 out.go:204]   - Configuring RBAC rules ...
	I0921 22:11:38.330460  251080 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:11:38.335703  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:11:38.340513  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:11:38.342637  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:11:38.344542  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:11:38.346406  251080 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:11:38.353833  251080 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:11:38.556116  251080 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:11:38.780075  251080 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:11:38.781317  251080 kubeadm.go:317] 
	I0921 22:11:38.781428  251080 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:11:38.781465  251080 kubeadm.go:317] 
	I0921 22:11:38.781595  251080 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:11:38.781624  251080 kubeadm.go:317] 
	I0921 22:11:38.781667  251080 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:11:38.781749  251080 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:11:38.781810  251080 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:11:38.781842  251080 kubeadm.go:317] 
	I0921 22:11:38.781971  251080 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:11:38.781987  251080 kubeadm.go:317] 
	I0921 22:11:38.782044  251080 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:11:38.782061  251080 kubeadm.go:317] 
	I0921 22:11:38.782142  251080 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:11:38.782239  251080 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:11:38.782336  251080 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:11:38.782349  251080 kubeadm.go:317] 
	I0921 22:11:38.782445  251080 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:11:38.782532  251080 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:11:38.782539  251080 kubeadm.go:317] 
	I0921 22:11:38.782640  251080 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.782760  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:11:38.782786  251080 kubeadm.go:317] 	--control-plane 
	I0921 22:11:38.782792  251080 kubeadm.go:317] 
	I0921 22:11:38.782886  251080 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:11:38.782893  251080 kubeadm.go:317] 
	I0921 22:11:38.782985  251080 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.783105  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:11:38.785995  251080 kubeadm.go:317] W0921 22:11:29.273642     735 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:11:38.786254  251080 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:11:38.786399  251080 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
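The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key. It can be recomputed from the CA on this node (certificatesDir is /var/lib/minikube/certs per the config earlier; this is the documented recipe, assuming an RSA CA):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'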
	I0921 22:11:38.786445  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:38.786461  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:38.788308  251080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:11:37.030744  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:39.030828  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:39.337094  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:41.836184  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:38.789713  251080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:11:38.793640  251080 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:11:38.793660  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:11:38.808403  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
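Once the kindnet manifest is applied, its DaemonSet pods should come up in kube-system; a quick check (assuming the manifest's usual app=kindnet label):

	sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet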
	I0921 22:11:39.596042  251080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:11:39.596097  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.596114  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_11_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
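The version/commit/name labels applied here can be read back with:

	sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get nodes --show-labels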
	I0921 22:11:39.690430  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.696472  251080 ops.go:34] apiserver oom_adj: -16
	I0921 22:11:40.252956  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:40.753124  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.752958  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.252749  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.752944  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.530878  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:44.030810  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:43.836287  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:45.837347  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:43.252940  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:43.752934  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.752478  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.252903  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.752467  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.253256  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.752683  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.252892  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.752682  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.530737  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:49.030362  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:48.335973  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:50.336273  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:48.252790  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:48.752428  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.252346  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.753263  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.252919  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.752432  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.252537  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.752927  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.892900  251080 kubeadm.go:1067] duration metric: took 12.296861621s to wait for elevateKubeSystemPrivileges.
	I0921 22:11:51.892930  251080 kubeadm.go:398] StartCluster complete in 22.696819381s
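The repeated "kubectl get sa default" calls above are a poll: the default ServiceAccount only exists once the controller-manager's ServiceAccount controller has run, and the minikube-rbac binding is not usable before then. The loop is roughly equivalent to:

	until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done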
	I0921 22:11:51.892946  251080 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:51.893033  251080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:51.894853  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:52.410836  251080 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:11:52.410900  251080 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:52.412753  251080 out.go:177] * Verifying Kubernetes components...
	I0921 22:11:52.410955  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:11:52.410996  251080 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:11:52.411177  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:52.414055  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:11:52.414125  251080 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414149  251080 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.414157  251080 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:11:52.414160  251080 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414177  251080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414210  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.414507  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.414719  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.453309  251080 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.453343  251080 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:11:52.453370  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.456214  251080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:11:52.453863  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.457793  251080 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:52.457817  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:11:52.457870  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.489924  251080 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.489952  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:11:52.490001  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.499827  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:52.521036  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:11:52.523074  251080 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:11:52.524139  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:52.694618  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.698974  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:53.100405  251080 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
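The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile just ahead of its forward plugin, so the record resolves for all in-cluster lookups; the injected stanza is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}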
	I0921 22:11:53.285087  251080 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0921 22:11:51.030761  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:53.030886  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:55.031051  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:52.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:54.836927  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:53.286473  251080 addons.go:414] enableAddons completed in 875.500055ms
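In a healthy run, both enabled addons are visible right afterwards (object names as minikube deploys them; a sketch):

	sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get storageclass
	sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pod storage-provisioner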
	I0921 22:11:54.531242  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:56.531286  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:57.531654  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:00.031155  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:57.336103  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:59.336841  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:01.836213  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:59.030486  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:01.030832  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:03.031401  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:02.530785  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:05.030896  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:03.836938  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:06.336349  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:05.530847  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:08.030644  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:07.031537  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:09.530730  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:08.837257  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:11.336377  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:10.031510  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:12.531037  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:11.531989  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:14.030729  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:13.837027  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:16.336212  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:14.531388  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:17.030653  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:16.031195  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:18.530817  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:18.837013  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:21.336931  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:19.531491  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:22.030834  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:20.531145  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:23.030753  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:25.033218  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:23.836124  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:25.836792  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:24.530794  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:27.030911  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:27.530979  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:30.030328  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:28.337008  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:30.836665  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:29.031263  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:31.531092  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:32.031104  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:34.530719  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:32.836819  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:35.336220  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:34.030989  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:36.530772  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:37.031009  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:39.530361  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:37.336820  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:39.837041  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:39.030837  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:41.030918  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:43.031395  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:41.530781  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:43.531407  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:42.336354  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:44.836264  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:46.836827  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:45.530777  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:47.531054  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:46.030030  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:48.030327  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:50.030839  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:49.336608  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:51.336789  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:49.531276  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:51.531467  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:52.031223  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:54.032232  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:53.836292  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:55.836687  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:54.030859  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:56.530994  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:56.531050  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:59.030372  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:57.836753  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:59.836812  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:59.030908  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:01.031504  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:01.031167  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:03.531055  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:02.336234  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:04.337102  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:06.836799  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:03.531248  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:06.031222  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:06.030340  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:08.030411  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:10.031005  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:08.836960  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:11.336661  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:08.530717  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:10.531407  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:13.030797  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:13.338612  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:13.338639  242109 node_ready.go:38] duration metric: took 4m0.008551222s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:13:13.340854  242109 out.go:177] 
	W0921 22:13:13.342210  242109 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:13:13.342226  242109 out.go:239] * 
	W0921 22:13:13.342954  242109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:13:13.344170  242109 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	4319d6b905197       d921cee849482       About a minute ago   Running             kindnet-cni               1                   842ff71db5ddd
	9b4bcc68b201c       d921cee849482       3 minutes ago        Exited              kindnet-cni               0                   842ff71db5ddd
	3eefbcb898b09       1c7d8c51823b5       4 minutes ago        Running             kube-proxy                0                   3a052127f22d7
	6a4b91f0531d1       a8a176a5d5d69       4 minutes ago        Running             etcd                      0                   9756bf60beb90
	a9c3d39d9942f       ca0ea1ee3cfd3       4 minutes ago        Running             kube-scheduler            0                   26094dc69faf0
	b69529a7e224f       97801f8394908       4 minutes ago        Running             kube-apiserver            0                   6c9070db9088c
	b1a22ede66e31       dbfceb93c69b6       4 minutes ago        Running             kube-controller-manager   0                   649c092f0b0ca
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:08:33 UTC, end at Wed 2022-09-21 22:13:14 UTC. --
	Sep 21 22:09:12 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:12.891077328Z" level=info msg="CreateContainer within sandbox \"3a052127f22d7639bab1f2d0ee866a6dc31202abb69b4b40cd731f590873cd82\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 21 22:09:12 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:12.907229738Z" level=info msg="CreateContainer within sandbox \"3a052127f22d7639bab1f2d0ee866a6dc31202abb69b4b40cd731f590873cd82\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843\""
	Sep 21 22:09:12 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:12.907933089Z" level=info msg="StartContainer for \"3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843\""
	Sep 21 22:09:12 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:12.996113109Z" level=info msg="StartContainer for \"3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843\" returns successfully"
	Sep 21 22:09:13 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:13.091923521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-27cj5,Uid:90383218-a547-458a-8b5e-af84c9d2b017,Namespace:kube-system,Attempt:0,} returns sandbox id \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\""
	Sep 21 22:09:13 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:13.093675765Z" level=info msg="PullImage \"kindest/kindnetd:v20220726-ed811e41\""
	Sep 21 22:09:13 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:13.095300075Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 21 22:09:13 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:13.982168794Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.812379798Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd:v20220726-ed811e41,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.814500625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.815966191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/kindest/kindnetd:v20220726-ed811e41,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.817558502Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.817837588Z" level=info msg="PullImage \"kindest/kindnetd:v20220726-ed811e41\" returns image reference \"sha256:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9\""
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.819898787Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.831230283Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939\""
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.831861830Z" level=info msg="StartContainer for \"9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939\""
	Sep 21 22:09:15 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:09:15.994199676Z" level=info msg="StartContainer for \"9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939\" returns successfully"
	Sep 21 22:11:56 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:56.541708926Z" level=info msg="shim disconnected" id=9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939
	Sep 21 22:11:56 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:56.541780429Z" level=warning msg="cleaning up after shim disconnected" id=9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939 namespace=k8s.io
	Sep 21 22:11:56 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:56.541792175Z" level=info msg="cleaning up dead shim"
	Sep 21 22:11:56 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:56.551710796Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:11:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2564 runtime=io.containerd.runc.v2\n"
	Sep 21 22:11:57 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:57.509646896Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 21 22:11:57 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:57.524349095Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221\""
	Sep 21 22:11:57 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:57.525037073Z" level=info msg="StartContainer for \"4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221\""
	Sep 21 22:11:57 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:11:57.679545520Z" level=info msg="StartContainer for \"4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220921220832-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220921220832-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=no-preload-20220921220832-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_08_58_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:08:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220921220832-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:13:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:09:28 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:09:28 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:09:28 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:09:28 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220921220832-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                44c6c62a-5061-4f07-a2f0-9d563da1b73e
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220921220832-10174                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m16s
	  kube-system                 kindnet-27cj5                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-no-preload-20220921220832-10174              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-no-preload-20220921220832-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-nxpf5                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-no-preload-20220921220832-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m24s (x5 over 4m24s)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x5 over 4m24s)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x3 over 4m24s)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s                  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s                  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s                  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646] <==
	* {"level":"info","ts":"2022-09-21T22:09:12.389Z","caller":"traceutil/trace.go:171","msg":"trace[482272922] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"278.191075ms","start":"2022-09-21T22:09:12.111Z","end":"2022-09-21T22:09:12.389Z","steps":["trace[482272922] 'process raft request'  (duration: 278.089757ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.389Z","caller":"traceutil/trace.go:171","msg":"trace[929886050] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"274.545516ms","start":"2022-09-21T22:09:12.115Z","end":"2022-09-21T22:09:12.389Z","steps":["trace[929886050] 'process raft request'  (duration: 274.393041ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.389Z","caller":"traceutil/trace.go:171","msg":"trace[281568925] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"276.956319ms","start":"2022-09-21T22:09:12.112Z","end":"2022-09-21T22:09:12.389Z","steps":["trace[281568925] 'process raft request'  (duration: 276.78745ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[34397509] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:344; }","duration":"274.143178ms","start":"2022-09-21T22:09:12.115Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[34397509] 'read index received'  (duration: 184.947899ms)","trace[34397509] 'applied index is now lower than readState.Index'  (duration: 89.194459ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[86480274] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"267.792108ms","start":"2022-09-21T22:09:12.122Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[86480274] 'process raft request'  (duration: 267.534906ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[1151117429] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"270.92859ms","start":"2022-09-21T22:09:12.119Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[1151117429] 'process raft request'  (duration: 270.611323ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.390Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"274.275236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[490030946] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"274.326664ms","start":"2022-09-21T22:09:12.115Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[490030946] 'agreement among raft nodes before linearized reading'  (duration: 274.229021ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.398Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"281.551371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4601"}
	{"level":"info","ts":"2022-09-21T22:09:12.398Z","caller":"traceutil/trace.go:171","msg":"trace[21787300] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:338; }","duration":"281.619065ms","start":"2022-09-21T22:09:12.116Z","end":"2022-09-21T22:09:12.398Z","steps":["trace[21787300] 'agreement among raft nodes before linearized reading'  (duration: 281.515501ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.516Z","caller":"traceutil/trace.go:171","msg":"trace[1947667569] linearizableReadLoop","detail":"{readStateIndex:355; appliedIndex:355; }","duration":"118.331899ms","start":"2022-09-21T22:09:12.397Z","end":"2022-09-21T22:09:12.516Z","steps":["trace[1947667569] 'read index received'  (duration: 118.31338ms)","trace[1947667569] 'applied index is now lower than readState.Index'  (duration: 15.637µs)"],"step_count":2}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"125.444129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-27cj5\" ","response":"range_response_count:1 size:3714"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[877726603] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-27cj5; range_end:; response_count:1; response_revision:342; }","duration":"125.520607ms","start":"2022-09-21T22:09:12.392Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[877726603] 'agreement among raft nodes before linearized reading'  (duration: 123.409684ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"124.770564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1569897163] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:342; }","duration":"124.848752ms","start":"2022-09-21T22:09:12.393Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1569897163] 'agreement among raft nodes before linearized reading'  (duration: 122.552703ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[584162130] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"116.491437ms","start":"2022-09-21T22:09:12.402Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[584162130] 'process raft request'  (duration: 116.413393ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1358578711] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"118.162741ms","start":"2022-09-21T22:09:12.400Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1358578711] 'process raft request'  (duration: 115.718872ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.950399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-27cj5\" ","response":"range_response_count:1 size:3714"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1353864366] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-27cj5; range_end:; response_count:1; response_revision:346; }","duration":"119.996483ms","start":"2022-09-21T22:09:12.398Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1353864366] 'agreement among raft nodes before linearized reading'  (duration: 119.913158ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[677856182] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"116.412604ms","start":"2022-09-21T22:09:12.402Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[677856182] 'process raft request'  (duration: 116.313434ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.884809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4601"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[39557479] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"117.099753ms","start":"2022-09-21T22:09:12.401Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[39557479] 'process raft request'  (duration: 116.94402ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1861985295] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:346; }","duration":"116.918724ms","start":"2022-09-21T22:09:12.401Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1861985295] 'agreement among raft nodes before linearized reading'  (duration: 116.853502ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.054969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-nxpf5\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1034205839] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-nxpf5; range_end:; response_count:1; response_revision:346; }","duration":"120.090347ms","start":"2022-09-21T22:09:12.398Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1034205839] 'agreement among raft nodes before linearized reading'  (duration: 120.027308ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  22:13:14 up 55 min,  0 users,  load average: 2.78, 3.37, 2.63
	Linux no-preload-20220921220832-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0] <==
	* I0921 22:08:55.091280       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:08:55.162042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:08:55.175656       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:08:55.175803       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:08:55.175922       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:08:55.176029       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:08:55.176068       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:08:55.176224       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:08:55.756064       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:08:55.965619       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:08:55.968635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:08:55.968658       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:08:56.332396       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:08:56.376266       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:08:56.506135       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:08:56.511625       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0921 22:08:56.512520       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:08:56.515761       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:08:57.004401       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:08:57.961688       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:08:57.968176       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:08:57.975887       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:08:58.080449       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:09:11.440196       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0921 22:09:11.442903       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409] <==
	* I0921 22:09:10.783384       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	W0921 22:09:10.783442       1 node_lifecycle_controller.go:1058] Missing timestamp for Node no-preload-20220921220832-10174. Assuming now as a timestamp.
	I0921 22:09:10.783454       1 taint_manager.go:209] "Sending events to api server"
	I0921 22:09:10.783502       1 event.go:294] "Event occurred" object="no-preload-20220921220832-10174" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller"
	I0921 22:09:10.783512       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0921 22:09:10.827001       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0921 22:09:10.840033       1 range_allocator.go:367] Set node no-preload-20220921220832-10174 PodCIDR to [10.244.0.0/24]
	I0921 22:09:10.840385       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:09:10.865038       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:09:10.880243       1 shared_informer.go:262] Caches are synced for crt configmap
	I0921 22:09:10.932197       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:09:10.932227       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0921 22:09:10.932229       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:09:10.932439       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:09:10.932555       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:09:11.261700       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:09:11.282960       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:09:11.282991       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:09:11.720390       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:09:11.808535       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nxpf5"
	I0921 22:09:11.808562       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-27cj5"
	I0921 22:09:12.391915       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-9864v"
	I0921 22:09:12.399398       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-m8xgt"
	I0921 22:09:12.734086       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:09:12.739609       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-9864v"
	
	* 
	* ==> kube-proxy [3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843] <==
	* I0921 22:09:13.034514       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0921 22:09:13.034595       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0921 22:09:13.034633       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:09:13.054326       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:09:13.054377       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:09:13.054390       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:09:13.054418       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:09:13.054463       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:09:13.054692       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:09:13.055025       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:09:13.055049       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:09:13.055668       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:09:13.055697       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:09:13.055773       1 config.go:444] "Starting node config controller"
	I0921 22:09:13.055782       1 config.go:317] "Starting service config controller"
	I0921 22:09:13.055817       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:09:13.055807       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:09:13.156647       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:09:13.156676       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:09:13.156693       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb] <==
	* W0921 22:08:55.106405       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:08:55.106631       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:08:55.106455       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:08:55.106648       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:08:55.106452       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:08:55.106664       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0921 22:08:55.106822       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:08:55.106832       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:08:55.106813       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106848       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:08:55.106851       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:08:55.106908       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106964       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106987       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:08:55.107358       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:08:55.107467       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:08:55.937169       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.937212       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:08:56.007668       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:08:56.007706       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:08:56.040851       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:08:56.040885       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:08:56.152475       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:08:56.152518       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0921 22:08:56.597365       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:08:33 UTC, end at Wed 2022-09-21 22:13:14 UTC. --
	Sep 21 22:11:18 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:18.328924    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:23 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:23.330187    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:28 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:28.331074    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:33 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:33.331805    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:38 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:38.333431    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:43 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:43.334343    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:48 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:48.335111    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:53 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:53.336419    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:11:57 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:11:57.507323    1740 scope.go:115] "RemoveContainer" containerID="9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939"
	Sep 21 22:11:58 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:11:58.337605    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:03.339415    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:08 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:08.340189    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:13 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:13.340880    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:18 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:18.342044    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:23 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:23.343245    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:28 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:28.344293    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:33 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:33.346007    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:38 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:38.347286    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:43 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:43.348421    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:48 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:48.349355    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:53 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:53.350060    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:12:58 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:12:58.350904    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:13:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:13:03.352201    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:13:08 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:13:08.353160    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:13:13 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:13:13.354704    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
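Note (editorial, not part of the captured run): the GUEST_START timeout above is the node never leaving NotReady — kubelet keeps logging "cni plugin not initialized" even after the kindnet container is restarted, while containerd repeatedly fails to parse a registry hosts.toml. A minimal triage sketch against this profile (standard minikube/kubectl commands; the certs.d path assumes containerd's stock registry-config layout and is not taken from this run):

	# 1. Confirm the kubelet's view of the Ready condition:
	kubectl --context no-preload-20220921220832-10174 get node \
	  no-preload-20220921220832-10174 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
	
	# 2. Check whether any CNI config was ever written on the node;
	#    an empty /etc/cni/net.d matches "cni plugin not initialized":
	minikube ssh -p no-preload-20220921220832-10174 -- ls -l /etc/cni/net.d
	
	# 3. Inspect the registry config behind the repeated
	#    "failed to decode hosts.toml" errors in the containerd log
	#    (path is containerd's conventional certs.d layout):
	minikube ssh -p no-preload-20220921220832-10174 -- \
	  sudo cat /etc/containerd/certs.d/docker.io/hosts.toml

An empty /etc/cni/net.d would match the kubelet symptom; a malformed hosts.toml would explain the containerd decode errors, though the kindnet image pull above did eventually succeed despite them.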
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-m8xgt storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-m8xgt storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-m8xgt storage-provisioner: exit status 1 (58.571301ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-m8xgt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-m8xgt storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (283.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (484.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e9f10a6a-fb42-454a-8573-8a278ba1bbdb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/embed-certs/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
start_stop_delete_test.go:196: TestStartStop/group/embed-certs/serial/DeployApp: showing logs for failed pods as of 2022-09-21 22:17:18.087940992 +0000 UTC m=+3006.226393011
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context embed-certs-20220921220439-10174 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wf26r (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-wf26r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  2m45s (x2 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context embed-certs-20220921220439-10174 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
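
Editor's note: the FailedScheduling event is consistent with the node never reaching Ready — the busybox pod tolerates node.kubernetes.io/not-ready only for the NoExecute effect, not for the NoSchedule taint the scheduler is reporting, so it can never be placed on the single node. One way to inspect the node's taints directly (a sketch; context name taken from this run):

	kubectl --context embed-certs-20220921220439-10174 get node -o jsonpath='{.items[0].spec.taints}'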
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220439-10174
helpers_test.go:235: (dbg) docker inspect embed-certs-20220921220439-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a",
	        "Created": "2022-09-21T22:04:47.451918435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229029,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:04:47.821915918Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hostname",
	        "HostsPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hosts",
	        "LogPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a-json.log",
	        "Name": "/embed-certs-20220921220439-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20220921220439-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220921220439-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220921220439-10174",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220921220439-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220921220439-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9eafa65cab570427f54e672c314a2de414b922ec2d5c452fa77eb94dc7c53c9e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9eafa65cab57",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220921220439-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0efc3a031048",
	                        "embed-certs-20220921220439-10174"
	                    ],
	                    "NetworkID": "e71aa30fd3ace87130e43e4abce1f2566d43d95c3b2e37ab1594e3c5a105c1bc",
	                    "EndpointID": "e12f2a7ae893a2d247b22ed045ec225e1db5924afdba9eb642a202517e80b83a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
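Editor's note: the inspect output shows the kic container itself is healthy — State.Running is true and the node holds 192.168.67.2 on the embed-certs-20220921220439-10174 network — so the scheduling failure lives inside the node (kubelet/CNI) rather than at the Docker layer. A compact re-check of just those two fields (a sketch using the container name from this run):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-20220921220439-10174").IPAddress}}' embed-certs-20220921220439-10174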
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs -n 25: (1.03697434s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                | kindnet-20220921215523-10174                    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174                      |                                                 |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kindnet-20220921215523-10174                    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174                      |                                                 |         |         |                     |                     |
	| start   | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	| start   | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                                     |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kubernetes-upgrade-20220921215522-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | kubernetes-upgrade-20220921215522-10174           |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC |                     |
	|         | embed-certs-20220921220439-10174                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:07 UTC |
	| start   | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                 |         |         |                     |                     |
	| delete  | -p calico-20220921215524-10174                    | calico-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	| delete  | -p                                                | disable-driver-mounts-20220921220831-10174      | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	|         | disable-driver-mounts-20220921220831-10174        |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC |                     |
	|         | no-preload-20220921220832-10174                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                 |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:11:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:11:18.087901  251080 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:11:18.088024  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088036  251080 out.go:309] Setting ErrFile to fd 2...
	I0921 22:11:18.088042  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088174  251080 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:11:18.088746  251080 out.go:303] Setting JSON to false
	I0921 22:11:18.090393  251080 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3229,"bootTime":1663795049,"procs":653,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:11:18.090456  251080 start.go:125] virtualization: kvm guest
	I0921 22:11:18.093408  251080 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:11:18.094844  251080 notify.go:214] Checking for updates...
	I0921 22:11:18.096337  251080 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:11:18.097775  251080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:11:18.099219  251080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:18.100740  251080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:11:18.102389  251080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:11:18.104495  251080 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104651  251080 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104807  251080 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:11:18.104881  251080 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:11:18.138312  251080 docker.go:137] docker version: linux-20.10.18
	I0921 22:11:18.138426  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.232188  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.15986917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.232324  251080 docker.go:254] overlay module found
	I0921 22:11:18.234351  251080 out.go:177] * Using the docker driver based on user configuration
	I0921 22:11:18.235767  251080 start.go:284] selected driver: docker
	I0921 22:11:18.235790  251080 start.go:808] validating driver "docker" against <nil>
	I0921 22:11:18.235809  251080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:11:18.236643  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.330559  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.257769036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.330687  251080 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:11:18.330876  251080 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:11:18.332978  251080 out.go:177] * Using Docker driver with root privileges
	I0921 22:11:18.334347  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:18.334364  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:18.334381  251080 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:11:18.334405  251080 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:18.336049  251080 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.337335  251080 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:11:18.338625  251080 out.go:177] * Pulling base image ...
	I0921 22:11:18.339915  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:18.339961  251080 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:11:18.339976  251080 cache.go:57] Caching tarball of preloaded images
	I0921 22:11:18.340010  251080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:11:18.340234  251080 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:11:18.340259  251080 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:11:18.340397  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:18.340430  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json: {Name:mk68817f4bf887721f92775083cbcee80d5fb68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:18.367818  251080 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:11:18.367843  251080 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:11:18.367856  251080 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:11:18.367892  251080 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:18.368018  251080 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 101.344µs
	I0921 22:11:18.368055  251080 start.go:93] Provisioning new machine with config: &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:18.368157  251080 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:17.425811  247121 kubeadm.go:778] kubelet initialised
	I0921 22:11:17.425835  247121 kubeadm.go:779] duration metric: took 58.431682599s waiting for restarted kubelet to initialise ...
	I0921 22:11:17.425842  247121 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:11:17.430135  247121 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.434236  247121 pod_ready.go:92] pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.434255  247121 pod_ready.go:81] duration metric: took 4.0995ms waiting for pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.434264  247121 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.438049  247121 pod_ready.go:92] pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.438070  247121 pod_ready.go:81] duration metric: took 3.799088ms waiting for pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.438084  247121 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.441889  247121 pod_ready.go:92] pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.441906  247121 pod_ready.go:81] duration metric: took 3.813836ms waiting for pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.441918  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.445604  247121 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.445626  247121 pod_ready.go:81] duration metric: took 3.699251ms waiting for pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.445637  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.825354  247121 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.825379  247121 pod_ready.go:81] duration metric: took 379.733387ms waiting for pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.825389  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxg44" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.225967  247121 pod_ready.go:92] pod "kube-proxy-fxg44" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:18.225996  247121 pod_ready.go:81] duration metric: took 400.60033ms waiting for pod "kube-proxy-fxg44" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.226010  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.625047  247121 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:18.625076  247121 pod_ready.go:81] duration metric: took 399.057463ms waiting for pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.625094  247121 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace to be "Ready" ...
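Each of the pod_ready.go waits above follows the same pattern: poll the pod's status until its Ready condition reports True, giving up once the 4m0s budget is spent. A minimal client-go sketch of that loop, assuming a kubeconfig path and a 500ms poll interval (both illustrative; this is not minikube's actual pod_ready.go code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the
// 4-minute budget from the log expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5644d7b6d9-ft4dg"))
}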
	I0921 22:11:18.837224  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:21.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:18.370528  251080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:18.370720  251080 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:18.370744  251080 client.go:168] LocalClient.Create starting
	I0921 22:11:18.370817  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:11:18.370845  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370861  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.370925  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:11:18.370944  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370953  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.371236  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:18.395515  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:18.395579  251080 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221118-10174] to gather additional debugging logs...
	I0921 22:11:18.395600  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174
	W0921 22:11:18.419547  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 returned with exit code 1
	I0921 22:11:18.419579  251080 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221118-10174]: docker network inspect default-k8s-different-port-20220921221118-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.419591  251080 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221118-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221118-10174
	
	** /stderr **
	I0921 22:11:18.419643  251080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:18.444258  251080 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:11:18.445274  251080 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:11:18.446196  251080 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-e71aa30fd3ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7a:b1:c8:c1}}
	I0921 22:11:18.447244  251080 network.go:241] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-4f93bc2f061a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ca:b2:42:ce}}
	I0921 22:11:18.448755  251080 network.go:290] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc00012cb10] misses:0}
	I0921 22:11:18.448802  251080 network.go:236] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
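The four "skipping subnet ... that is taken" lines and the final reservation above suggest the scan steps the third octet in increments of 9 (49, 58, 67, 76, 85) until it finds a /24 that no existing bridge claims. A toy sketch under that assumption (helper name and step size are inferred from the log, not taken from minikube's network.go):

package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns
// the first candidate not already backed by a bridge interface.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 246; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // nothing free in the scanned range
}

func main() {
	taken := map[string]bool{ // the four bridges the log skipped
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
}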
	I0921 22:11:18.448826  251080 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0921 22:11:18.448915  251080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.510820  251080 network_create.go:99] docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 created
	I0921 22:11:18.510857  251080 kic.go:106] calculated static IP "192.168.85.2" for the "default-k8s-different-port-20220921221118-10174" container
	I0921 22:11:18.510919  251080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:11:18.536329  251080 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221118-10174 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:11:18.561443  251080 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.561538  251080 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220921221118-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --entrypoint /usr/bin/test -v default-k8s-different-port-20220921221118-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:11:19.127923  251080 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:19.127974  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:19.127994  251080 kic.go:179] Starting extracting preloaded images to volume ...
	I0921 22:11:19.128049  251080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir
	I0921 22:11:21.030814  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:23.030888  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:23.836773  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:25.837620  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:25.638147  251080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir: (6.510027893s)
	I0921 22:11:25.638182  251080 kic.go:188] duration metric: took 6.510186 seconds to extract preloaded images to volume
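The 6.5s step that just completed amounts to: run a throwaway kicbase container that bind-mounts the preload tarball read-only and untars it into the cluster's named volume, so containerd starts with every image already on disk. A hedged os/exec sketch of the same invocation (tarball path shortened and image digest dropped for readability):

package main

import "os/exec"

func main() {
	tarball := "/path/to/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4" // shortened
	volume := "default-k8s-different-port-20220921221118-10174"
	kicbase := "gcr.io/k8s-minikube/kicbase:v0.0.34" // digest elided

	// tar runs inside the container; -I lz4 decompresses the preload
	// stream straight into the volume mounted at /extractDir.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbase, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(string(out))
	}
}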
	W0921 22:11:25.638326  251080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:11:25.638433  251080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:11:25.732843  251080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220921221118-10174 --name default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --network default-k8s-different-port-20220921221118-10174 --ip 192.168.85.2 --volume default-k8s-different-port-20220921221118-10174:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
	I0921 22:11:26.149451  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Running}}
	I0921 22:11:26.176098  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.201313  251080 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220921221118-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:11:26.261131  251080 oci.go:144] the created container "default-k8s-different-port-20220921221118-10174" has a running status.
	I0921 22:11:26.261169  251080 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa...
	I0921 22:11:26.437655  251080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:11:26.519667  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.549062  251080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:11:26.549102  251080 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220921221118-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:11:26.638792  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.669847  251080 machine.go:88] provisioning docker machine ...
	I0921 22:11:26.669895  251080 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:26.669965  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.697039  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.697198  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.697217  251080 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:11:26.837603  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:11:26.837685  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.861819  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.861990  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.862027  251080 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:11:26.991431  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: 
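Both SSH commands above reach the container through an ephemeral localhost port: docker published 22/tcp to 127.0.0.1:49418 at container creation, and the inspect template in the cli_runner lines recovers that port before each session. A small sketch of the discovery step (a plain TCP dial stands in for libmachine's native SSH handshake):

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
)

func main() {
	name := "default-k8s-different-port-20220921221118-10174"
	// Same Go template the log uses, minus the extra quoting: pull the
	// host port docker bound to the container's 22/tcp.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	conn, err := net.Dial("tcp", net.JoinHostPort("127.0.0.1", port))
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint reachable at", conn.RemoteAddr())
	conn.Close()
}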
	I0921 22:11:26.991457  251080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:11:26.991475  251080 ubuntu.go:177] setting up certificates
	I0921 22:11:26.991485  251080 provision.go:83] configureAuth start
	I0921 22:11:26.991540  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.016270  251080 provision.go:138] copyHostCerts
	I0921 22:11:27.016322  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:11:27.016333  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:11:27.016404  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:11:27.016484  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:11:27.016495  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:11:27.016521  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:11:27.016571  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:11:27.016579  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:11:27.016602  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:11:27.016655  251080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
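The server cert being generated here is signed by the minikube CA and carries the IPs and hostname aliases from the san=[...] list, so one certificate satisfies clients connecting via 192.168.85.2, loopback, or any of the DNS names. A standard-library sketch of issuing such a certificate (throwaway in-memory CA and illustrative validity; not minikube's crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	// Server certificate with the SANs from the san=[...] list above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		IPAddresses:  []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-different-port-20220921221118-10174"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}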
	I0921 22:11:27.144451  251080 provision.go:172] copyRemoteCerts
	I0921 22:11:27.144512  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:11:27.144545  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.170137  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.266755  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:11:27.283950  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:11:27.300984  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:11:27.317480  251080 provision.go:86] duration metric: configureAuth took 325.986117ms
	I0921 22:11:27.317504  251080 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:11:27.317672  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:27.317689  251080 machine.go:91] provisioned docker machine in 647.81218ms
	I0921 22:11:27.317695  251080 client.go:171] LocalClient.Create took 8.9469458s
	I0921 22:11:27.317730  251080 start.go:167] duration metric: libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" took 8.947008533s
	I0921 22:11:27.317744  251080 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:27.317749  251080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:11:27.317788  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:11:27.317835  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.343342  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.435531  251080 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:11:27.438295  251080 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:11:27.438325  251080 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:11:27.438342  251080 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:11:27.438356  251080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:11:27.438371  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:11:27.438424  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:11:27.438521  251080 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:11:27.438630  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:11:27.445223  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:27.462414  251080 start.go:303] post-start completed in 144.661014ms
	I0921 22:11:27.462741  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.489387  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:27.489723  251080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:27.489786  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.514068  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.604197  251080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:27.608399  251080 start.go:128] duration metric: createHost completed in 9.240229808s
	I0921 22:11:27.608420  251080 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 9.240389159s
	I0921 22:11:27.608527  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634524  251080 ssh_runner.go:195] Run: systemctl --version
	I0921 22:11:27.634570  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634600  251080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:11:27.634691  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.660182  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.660873  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.749037  251080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:11:27.781889  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:11:27.791675  251080 docker.go:188] disabling docker service ...
	I0921 22:11:27.791773  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:11:27.809646  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:11:27.818739  251080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:11:27.897618  251080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:11:27.972484  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:11:27.982099  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:11:27.995156  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:11:28.003109  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.011124  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.018761  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:11:28.026807  251080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:11:28.034371  251080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:11:28.041097  251080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:11:28.122123  251080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:11:28.202854  251080 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:11:28.202928  251080 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:11:28.206617  251080 start.go:471] Will wait 60s for crictl version
	I0921 22:11:28.206695  251080 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:11:28.234745  251080 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:11:28.234815  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.263806  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.295305  251080 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:11:28.296662  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:28.320125  251080 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:11:28.323370  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:11:28.333100  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:28.333171  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.357788  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.357819  251080 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:11:28.357874  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.381874  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.381894  251080 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:11:28.381937  251080 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:11:28.408427  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:28.408456  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:28.408470  251080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:11:28.408481  251080 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:11:28.408605  251080 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
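Minikube renders the kubeadm YAML above from the options struct logged at kubeadm.go:156. A toy text/template rendering of just the InitConfiguration head, to show the shape of that step (template text and field names are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl mimics how a config template turns option values like
// AdvertiseAddress:192.168.85.2 and APIServerPort:8444 into YAML.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(
	"apiVersion: kubeadm.k8s.io/v1beta3\n" +
		"kind: InitConfiguration\n" +
		"localAPIEndpoint:\n" +
		"  advertiseAddress: {{.AdvertiseAddress}}\n" +
		"  bindPort: {{.BindPort}}\n"))

func main() {
	err := kubeadmTmpl.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
	}{"192.168.85.2", 8444})
	if err != nil {
		panic(err)
	}
}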
	I0921 22:11:28.408684  251080 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0921 22:11:28.408742  251080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:11:28.416363  251080 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:11:28.416431  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:11:28.423279  251080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:11:28.435844  251080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:11:28.448554  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:11:28.461624  251080 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:11:28.464712  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:11:28.474003  251080 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:11:28.474126  251080 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:11:28.474185  251080 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:11:28.474246  251080 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:11:28.474266  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt with IP's: []
	I0921 22:11:28.567465  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt ...
	I0921 22:11:28.567491  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt: {Name:mk7f007abc18238b3f4d498b44323ac1c9a08dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567699  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key ...
	I0921 22:11:28.567732  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key: {Name:mk573406c706742430a89f6f7a356628c72d9a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567860  251080 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:11:28.567875  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:11:28.821872  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c ...
	I0921 22:11:28.821903  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c: {Name:mk6f9bf09d9a1574fea352675c579bd5b29a8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822090  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c ...
	I0921 22:11:28.822105  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c: {Name:mk02ae9ee31bcf5d402f8edd4ad6acaa82a351d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822189  251080 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt
	I0921 22:11:28.822247  251080 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key
	I0921 22:11:28.822293  251080 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:11:28.822308  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt with IP's: []
	I0921 22:11:28.922715  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt ...
	I0921 22:11:28.922741  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt: {Name:mkaf5c21db58b4a0b90357c15da03dae1abe71c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.922924  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key ...
	I0921 22:11:28.922938  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key: {Name:mk91f1c41e1900ed0eb542cfae77ba7b1ff8febd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.923107  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:11:28.923145  251080 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:11:28.923157  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:11:28.923183  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:11:28.923210  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:11:28.923233  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:11:28.923271  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:28.923840  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:11:28.942334  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:11:28.959138  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:11:28.975925  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:11:28.992601  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:11:29.009145  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:11:29.025974  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:11:29.043889  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:11:29.061111  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:11:29.078117  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:11:29.095326  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:11:29.112457  251080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:11:29.124660  251080 ssh_runner.go:195] Run: openssl version
	I0921 22:11:29.129304  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:11:29.136557  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139479  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139517  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.144088  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:11:29.151649  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:11:29.158634  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161640  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161682  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.166192  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:11:29.173529  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:11:29.181111  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184130  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184178  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.189023  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
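The three openssl/ln rounds above wire the copied PEMs into OpenSSL's hashed CA directory: "openssl x509 -hash -noout" prints the subject hash, and a <hash>.0 symlink in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0 here) is what makes each certificate discoverable system-wide. The same two steps from Go (run as root; paths as in the log):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Subject hash OpenSSL uses to index certs in a CA directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA.pem
	// The <hash>.0 symlink mirrors the "ln -fs" the log runs over ssh.
	if err := os.Symlink(pemPath, filepath.Join("/etc/ssl/certs", hash+".0")); err != nil {
		panic(err)
	}
}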
	I0921 22:11:29.196116  251080 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:29.196192  251080 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:11:29.196252  251080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:11:29.220112  251080 cri.go:87] found id: ""
	I0921 22:11:29.220180  251080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:11:29.227068  251080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:11:29.234009  251080 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:11:29.234055  251080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:11:29.240811  251080 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:11:29.240844  251080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:11:29.281554  251080 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:11:29.281632  251080 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:11:29.309304  251080 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:11:29.309370  251080 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:11:29.309403  251080 kubeadm.go:317] OS: Linux
	I0921 22:11:29.309445  251080 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:11:29.309491  251080 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:11:29.309562  251080 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:11:29.309615  251080 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:11:29.309671  251080 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:11:29.309719  251080 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:11:29.309757  251080 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:11:29.309798  251080 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:11:29.309837  251080 kubeadm.go:317] CGROUPS_BLKIO: enabled
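
Aside: the CGROUPS_* lines above come from kubeadm's system verification, which reads /proc/cgroups and reports which controllers are enabled. A rough stand-alone equivalent (my sketch, not kubeadm's code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/cgroups")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "#") {
                continue // header row
            }
            // Columns: subsys_name  hierarchy  num_cgroups  enabled
            fields := strings.Fields(line)
            if len(fields) == 4 {
                state := "disabled"
                if fields[3] == "1" {
                    state = "enabled"
                }
                fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
            }
        }
    }
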
	I0921 22:11:29.374829  251080 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:11:29.374943  251080 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:11:29.375043  251080 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:11:29.498766  251080 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:11:25.530784  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:28.030733  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:30.031206  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:29.501998  251080 out.go:204]   - Generating certificates and keys ...
	I0921 22:11:29.502140  251080 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:11:29.502277  251080 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:11:29.597971  251080 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:11:29.835986  251080 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:11:30.089547  251080 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:11:30.169634  251080 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:11:30.225195  251080 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:11:30.225404  251080 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.334625  251080 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:11:30.334942  251080 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.454648  251080 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:11:30.667751  251080 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:11:30.842577  251080 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:11:30.842710  251080 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:11:30.909448  251080 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:11:31.056256  251080 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:11:31.120718  251080 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:11:31.191075  251080 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:11:31.202857  251080 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:11:31.203759  251080 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:11:31.203851  251080 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:11:31.284919  251080 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:11:28.336967  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:30.337024  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:31.287269  251080 out.go:204]   - Booting up control plane ...
	I0921 22:11:31.287395  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:11:31.288963  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:11:31.289889  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:11:31.290600  251080 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:11:31.292356  251080 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:11:32.530623  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:35.030218  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:32.337947  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:34.836321  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:36.837370  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:37.294544  251080 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002117 seconds
	I0921 22:11:37.294700  251080 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:11:37.302999  251080 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:11:37.820634  251080 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:11:37.820909  251080 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:11:38.328855  251080 kubeadm.go:317] [bootstrap-token] Using token: f60jp5.opo6lrzt47sur902
	I0921 22:11:38.330272  251080 out.go:204]   - Configuring RBAC rules ...
	I0921 22:11:38.330460  251080 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:11:38.335703  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:11:38.340513  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:11:38.342637  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0921 22:11:38.344542  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:11:38.346406  251080 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:11:38.353833  251080 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:11:38.556116  251080 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:11:38.780075  251080 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:11:38.781317  251080 kubeadm.go:317] 
	I0921 22:11:38.781428  251080 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:11:38.781465  251080 kubeadm.go:317] 
	I0921 22:11:38.781595  251080 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:11:38.781624  251080 kubeadm.go:317] 
	I0921 22:11:38.781667  251080 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:11:38.781749  251080 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:11:38.781810  251080 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:11:38.781842  251080 kubeadm.go:317] 
	I0921 22:11:38.781971  251080 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:11:38.781987  251080 kubeadm.go:317] 
	I0921 22:11:38.782044  251080 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:11:38.782061  251080 kubeadm.go:317] 
	I0921 22:11:38.782142  251080 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:11:38.782239  251080 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:11:38.782336  251080 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:11:38.782349  251080 kubeadm.go:317] 
	I0921 22:11:38.782445  251080 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:11:38.782532  251080 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:11:38.782539  251080 kubeadm.go:317] 
	I0921 22:11:38.782640  251080 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.782760  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:11:38.782786  251080 kubeadm.go:317] 	--control-plane 
	I0921 22:11:38.782792  251080 kubeadm.go:317] 
	I0921 22:11:38.782886  251080 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:11:38.782893  251080 kubeadm.go:317] 
	I0921 22:11:38.782985  251080 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.783105  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:11:38.785995  251080 kubeadm.go:317] W0921 22:11:29.273642     735 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:11:38.786254  251080 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:11:38.786399  251080 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
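
Aside: the --discovery-token-ca-cert-hash value in the join commands above is, per kubeadm's documented scheme, a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info (RFC 7469-style pinning). A sketch that recomputes it from the certs folder named earlier in the log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the "[certs] Using certificateDir folder" line above.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // DER-encoded SubjectPublicKeyInfo of the CA key, then SHA-256.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

Run on the node, this should print the same sha256:b419e7... value quoted in the join commands.
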
	I0921 22:11:38.786445  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:38.786461  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:38.788308  251080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:11:37.030744  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:39.030828  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:39.337094  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:41.836184  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:38.789713  251080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:11:38.793640  251080 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:11:38.793660  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:11:38.808403  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:11:39.596042  251080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:11:39.596097  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.596114  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_11_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.690430  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.696472  251080 ops.go:34] apiserver oom_adj: -16
	I0921 22:11:40.252956  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:40.753124  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.752958  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.252749  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.752944  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.530878  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:44.030810  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:43.836287  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:45.837347  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:43.252940  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:43.752934  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.752478  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.252903  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.752467  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.253256  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.752683  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.252892  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.752682  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.530737  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:49.030362  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:48.335973  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:50.336273  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:48.252790  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:48.752428  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.252346  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.753263  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.252919  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.752432  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.252537  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.752927  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.892900  251080 kubeadm.go:1067] duration metric: took 12.296861621s to wait for elevateKubeSystemPrivileges.
	I0921 22:11:51.892930  251080 kubeadm.go:398] StartCluster complete in 22.696819381s
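
Aside: the long run of `kubectl get sa default` commands above is a poll loop; judging by the timestamps, minikube retries roughly every half second until the default ServiceAccount exists before binding kube-system to cluster-admin (the elevateKubeSystemPrivileges step it times at 12.3s). A hedged sketch of such a loop (the 2-minute deadline is my assumption, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~.25/.75s cadence in the log
        }
        fmt.Println("timed out waiting for the default service account")
    }
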
	I0921 22:11:51.892946  251080 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:51.893033  251080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:51.894853  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:52.410836  251080 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:11:52.410900  251080 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:52.412753  251080 out.go:177] * Verifying Kubernetes components...
	I0921 22:11:52.410955  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:11:52.410996  251080 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:11:52.411177  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:52.414055  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:11:52.414125  251080 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414149  251080 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.414157  251080 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:11:52.414160  251080 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414177  251080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414210  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.414507  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.414719  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.453309  251080 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.453343  251080 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:11:52.453370  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.456214  251080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:11:52.453863  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.457793  251080 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:52.457817  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:11:52.457870  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.489924  251080 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.489952  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:11:52.490001  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.499827  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:52.521036  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:11:52.523074  251080 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:11:52.524139  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:52.694618  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.698974  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:53.100405  251080 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
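
Aside: the sed pipeline at 22:11:52.521036 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1): it inserts a hosts{} stanza with fallthrough immediately before the forward plugin line. A small sketch of the same transformation (the sample Corefile is illustrative only):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}`
        hosts := `        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
`
        // Insert the hosts block just before the forward directive,
        // mirroring the sed '/^        forward .../i ...' invocation.
        out := strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
        fmt.Println(out)
    }
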
	I0921 22:11:53.285087  251080 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0921 22:11:51.030761  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:53.030886  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:55.031051  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:52.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:54.836927  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:53.286473  251080 addons.go:414] enableAddons completed in 875.500055ms
	I0921 22:11:54.531242  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:56.531286  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:57.531654  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:00.031155  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:57.336103  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:59.336841  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:01.836213  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:59.030486  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:01.030832  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:03.031401  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:02.530785  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:05.030896  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:03.836938  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:06.336349  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:05.530847  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:08.030644  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:07.031537  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:09.530730  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:08.837257  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:11.336377  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:10.031510  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:12.531037  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:11.531989  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:14.030729  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:13.837027  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:16.336212  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:14.531388  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:17.030653  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:16.031195  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:18.530817  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:18.837013  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:21.336931  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:19.531491  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:22.030834  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:20.531145  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:23.030753  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:25.033218  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:23.836124  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:25.836792  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:24.530794  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:27.030911  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:27.530979  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:30.030328  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:28.337008  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:30.836665  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:29.031263  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:31.531092  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:32.031104  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:34.530719  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:32.836819  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:35.336220  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:34.030989  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:36.530772  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:37.031009  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:39.530361  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:37.336820  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:39.837041  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:39.030837  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:41.030918  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:43.031395  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:41.530781  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:43.531407  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:42.336354  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:44.836264  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:46.836827  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:45.530777  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:47.531054  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:46.030030  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:48.030327  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:50.030839  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:49.336608  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:51.336789  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:49.531276  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:51.531467  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:52.031223  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:54.032232  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:53.836292  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:55.836687  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:54.030859  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:56.530994  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:56.531050  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:59.030372  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:57.836753  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:59.836812  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:59.030908  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:01.031504  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:01.031167  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:03.531055  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:02.336234  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:04.337102  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:06.836799  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:03.531248  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:06.031222  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:06.030340  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:08.030411  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:10.031005  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:08.836960  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:11.336661  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:08.530717  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:10.531407  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:13.030797  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:13.338612  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:13.338639  242109 node_ready.go:38] duration metric: took 4m0.008551222s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:13:13.340854  242109 out.go:177] 
	W0921 22:13:13.342210  242109 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:13:13.342226  242109 out.go:239] * 
	W0921 22:13:13.342954  242109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:13:13.344170  242109 out.go:177] 
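
Aside: each node_ready.go:58 line above reflects the node's Ready condition still reporting False; after the full 4m0s node wait (within the 6m0s start budget) the test gives up and minikube exits with GUEST_START. One way to reproduce the probe by hand (hypothetical helper, not minikube's code; assumes kubectl points at the profile's kubeconfig):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Query only the NodeReady condition's status field.
        out, err := exec.Command("kubectl", "get", "node",
            "no-preload-20220921220832-10174",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Printf("Ready condition: %s\n", out) // stays "False" until kubelet/CNI settle
    }
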
	I0921 22:13:12.530866  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:15.030413  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:15.031195  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:17.531223  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:17.031193  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:19.530475  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:20.030902  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:22.531093  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:21.530536  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:23.531098  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:25.030827  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:27.031201  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:25.531210  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:28.030151  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:30.030808  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:29.530660  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:32.030955  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:32.530349  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:34.530517  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:34.031037  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:36.031513  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:37.031085  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:39.031177  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:38.531161  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:41.030620  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:43.031380  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:41.531099  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:44.030710  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:45.530783  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:48.030568  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:46.031382  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:48.531065  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:50.031250  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:52.531321  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:51.031119  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:53.529989  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:55.031106  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:57.530922  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:55.530775  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:58.030846  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:00.030925  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:02.531359  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:00.530982  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:03.030176  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:05.030812  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:05.030993  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:07.530797  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:07.530511  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:10.030457  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:09.530913  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:11.531450  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:12.031016  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:14.031227  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:14.030764  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:16.031283  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:18.031326  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:16.531495  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:19.030297  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:20.530765  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:23.031166  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:21.030808  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:23.530011  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:25.530744  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:27.531146  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:25.530854  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:28.030889  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:30.030823  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:32.031295  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:30.530876  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:33.030439  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:34.531158  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:37.031127  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:35.530538  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:37.530615  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:39.531497  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:39.531189  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:42.031109  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:42.030564  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:44.031074  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:44.531144  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:47.031215  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:46.531696  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:49.030430  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:49.531212  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:52.031591  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:51.030872  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:53.031128  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:54.530764  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:56.531628  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:55.531381  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:58.030638  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:59.031443  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:01.530765  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:00.530527  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:02.530909  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:05.031173  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:03.531199  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:06.031212  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:07.530981  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:09.531156  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:08.531535  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:11.030786  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:13.031313  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:12.031090  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:14.031428  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:15.531810  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:18.031388  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:16.530925  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:19.025682  247121 pod_ready.go:81] duration metric: took 4m0.400563713s waiting for pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace to be "Ready" ...
	E0921 22:15:19.025707  247121 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:15:19.025727  247121 pod_ready.go:38] duration metric: took 4m1.599877119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:15:19.025750  247121 kubeadm.go:631] restartCluster took 5m11.841964255s
	W0921 22:15:19.026022  247121 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:15:19.026073  247121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:15:21.378094  247121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.351995179s)
	I0921 22:15:21.378181  247121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:15:21.388550  247121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:15:21.396088  247121 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:15:21.396145  247121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:15:21.402886  247121 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
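	[editor's note] The "config check failed, skipping stale config cleanup" above appears to be a simple existence test: after `kubeadm reset`, none of the four expected kubeconfig files are present (ls exits with status 2), so there is nothing stale to clean before re-initializing. A minimal Go sketch of that decision, assuming the standard /etc/kubernetes paths:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Sketch of the stale-config check logged above: if none of the
		// expected kubeconfig files exist, there is nothing stale to clean.
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		found := 0
		for _, f := range files {
			if _, err := os.Stat(f); err == nil {
				found++
			}
		}
		if found == 0 {
			fmt.Println("config check failed, skipping stale config cleanup")
		}
	}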
	I0921 22:15:21.402927  247121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:15:21.449138  247121 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0921 22:15:21.449228  247121 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:15:21.477487  247121 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:15:21.477569  247121 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:15:21.477618  247121 kubeadm.go:317] OS: Linux
	I0921 22:15:21.477661  247121 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:15:21.477710  247121 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:15:21.477751  247121 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:15:21.477792  247121 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:15:21.477837  247121 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:15:21.477880  247121 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:15:21.549871  247121 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:15:21.550044  247121 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:15:21.550184  247121 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:15:21.684151  247121 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:15:21.686278  247121 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:15:21.693456  247121 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0921 22:15:21.766666  247121 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:15:21.773015  247121 out.go:204]   - Generating certificates and keys ...
	I0921 22:15:21.773194  247121 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:15:21.773288  247121 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:15:21.773394  247121 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:15:21.773481  247121 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:15:21.773609  247121 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:15:21.773694  247121 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:15:21.773794  247121 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:15:21.773873  247121 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:15:21.773986  247121 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:15:21.774097  247121 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:15:21.774176  247121 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:15:21.774255  247121 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:15:22.127958  247121 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:15:22.390000  247121 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:15:22.602949  247121 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:15:22.872836  247121 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:15:22.874115  247121 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:15:20.531309  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:23.030497  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:22.876290  247121 out.go:204]   - Booting up control plane ...
	I0921 22:15:22.876378  247121 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:15:22.882073  247121 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:15:22.883893  247121 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:15:22.884961  247121 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:15:22.887367  247121 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:15:25.031107  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:27.531379  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:31.389627  247121 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502254 seconds
	I0921 22:15:31.389810  247121 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:15:31.400525  247121 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:15:31.915530  247121 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:15:31.915694  247121 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-20220921220722-10174 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0921 22:15:32.422920  247121 kubeadm.go:317] [bootstrap-token] Using token: 11qd7w.gdk44a66vaieoafi
	I0921 22:15:32.424367  247121 out.go:204]   - Configuring RBAC rules ...
	I0921 22:15:32.424501  247121 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:15:32.428711  247121 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:15:32.431641  247121 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:15:32.433601  247121 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:15:32.435558  247121 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:15:32.480798  247121 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:15:32.836938  247121 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:15:32.838227  247121 kubeadm.go:317] 
	I0921 22:15:32.838305  247121 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:15:32.838317  247121 kubeadm.go:317] 
	I0921 22:15:32.838409  247121 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:15:32.838419  247121 kubeadm.go:317] 
	I0921 22:15:32.838450  247121 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:15:32.838553  247121 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:15:32.838638  247121 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:15:32.838650  247121 kubeadm.go:317] 
	I0921 22:15:32.838727  247121 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:15:32.838800  247121 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:15:32.838907  247121 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:15:32.838918  247121 kubeadm.go:317] 
	I0921 22:15:32.839009  247121 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0921 22:15:32.839087  247121 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:15:32.839100  247121 kubeadm.go:317] 
	I0921 22:15:32.839166  247121 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 11qd7w.gdk44a66vaieoafi \
	I0921 22:15:32.839252  247121 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:15:32.839298  247121 kubeadm.go:317]     --control-plane 	  
	I0921 22:15:32.839310  247121 kubeadm.go:317] 
	I0921 22:15:32.839399  247121 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:15:32.839414  247121 kubeadm.go:317] 
	I0921 22:15:32.839511  247121 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 11qd7w.gdk44a66vaieoafi \
	I0921 22:15:32.839602  247121 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:15:32.841219  247121 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:15:32.841316  247121 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
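	[editor's note] The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate, so it can be recomputed from ca.crt on the control plane. A short sketch; the path used here is the kubeadm default, whereas minikube keeps its certs under /var/lib/minikube/certs:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The hash covers the Subject Public Key Info, not the whole cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}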
	I0921 22:15:32.841350  247121 cni.go:95] Creating CNI manager for ""
	I0921 22:15:32.841362  247121 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:15:32.843196  247121 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:15:30.030428  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:32.031245  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:32.844505  247121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:15:32.848119  247121 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0921 22:15:32.848139  247121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:15:32.861406  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:15:33.081539  247121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:15:33.081628  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:33.081638  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=old-k8s-version-20220921220722-10174 minikube.k8s.io/updated_at=2022_09_21T22_15_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:33.202299  247121 ops.go:34] apiserver oom_adj: -16
	I0921 22:15:33.202435  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:33.787711  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:34.287913  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:34.787241  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:35.287775  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:34.531568  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:36.531614  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:35.786968  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:36.287553  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:36.787393  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:37.287388  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:37.787889  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:38.287160  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:38.787974  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:39.287230  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:39.787669  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:40.287712  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:39.031649  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:41.531271  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:40.787627  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:41.287056  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:41.787448  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:42.287147  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:42.788033  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:43.287821  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:43.787052  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:44.287156  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:44.787441  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:45.287162  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:44.031379  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:46.530779  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:45.787785  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:46.287589  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:46.787592  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:47.287976  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:47.787787  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:48.287665  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:48.787366  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:48.857183  247121 kubeadm.go:1067] duration metric: took 15.775622095s to wait for elevateKubeSystemPrivileges.
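	[editor's note] The run of identical `kubectl get sa default` commands above, spaced roughly 500ms apart, is a poll loop: the cluster-admin binding for kube-system is only usable once the default service account actually exists, so the command is retried until it succeeds. A hedged sketch of the same loop, assuming kubectl on PATH and an illustrative 2-minute deadline:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Poll until `kubectl get sa default` succeeds or the deadline passes.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}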
	I0921 22:15:48.857231  247121 kubeadm.go:398] StartCluster complete in 5m41.717623944s
	I0921 22:15:48.857253  247121 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:15:48.857430  247121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:15:48.859451  247121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:15:49.387181  247121 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220921220722-10174" rescaled to 1
	I0921 22:15:49.387247  247121 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:15:49.389908  247121 out.go:177] * Verifying Kubernetes components...
	I0921 22:15:49.387285  247121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:15:49.387336  247121 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0921 22:15:49.387501  247121 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:15:49.391233  247121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:15:49.391285  247121 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391318  247121 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220921220722-10174"
	W0921 22:15:49.391331  247121 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:15:49.391335  247121 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391353  247121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391362  247121 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391374  247121 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391391  247121 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391402  247121 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220921220722-10174"
	W0921 22:15:49.391409  247121 addons.go:162] addon metrics-server should already be in state true
	I0921 22:15:49.391387  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	I0921 22:15:49.391459  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	W0921 22:15:49.391411  247121 addons.go:162] addon dashboard should already be in state true
	I0921 22:15:49.391517  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	I0921 22:15:49.391742  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.391925  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.391969  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.391979  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.429024  247121 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:15:49.430843  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:15:49.430870  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:15:49.430934  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.432886  247121 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:15:49.432341  247121 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220921220722-10174"
	W0921 22:15:49.435066  247121 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:15:49.435100  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	I0921 22:15:49.435170  247121 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:15:49.435198  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:15:49.435253  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.435329  247121 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:15:49.435643  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.438643  247121 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:15:49.440303  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:15:49.440329  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:15:49.440382  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.475535  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.476717  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.477035  247121 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:15:49.477179  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:15:49.477273  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.487555  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.510365  247121 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220921220722-10174" to be "Ready" ...
	I0921 22:15:49.510586  247121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:15:49.512994  247121 node_ready.go:49] node "old-k8s-version-20220921220722-10174" has status "Ready":"True"
	I0921 22:15:49.513015  247121 node_ready.go:38] duration metric: took 2.614704ms waiting for node "old-k8s-version-20220921220722-10174" to be "Ready" ...
	I0921 22:15:49.513026  247121 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:15:49.517116  247121 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace to be "Ready" ...
	I0921 22:15:49.523194  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.696090  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:15:49.696275  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:15:49.696299  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:15:49.699280  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:15:49.699348  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:15:49.876210  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:15:49.876248  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:15:49.876623  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:15:49.876645  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:15:49.878450  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:15:49.898514  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:15:49.898544  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:15:49.981838  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:15:49.981884  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:15:50.077044  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:15:50.077071  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:15:50.080895  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:15:50.190260  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:15:50.190292  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:15:50.276211  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:15:50.276296  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:15:50.376927  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:15:50.376971  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:15:50.397367  247121 start.go:810] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I0921 22:15:50.477570  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:15:50.477605  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:15:50.502035  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:15:50.502070  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:15:50.600971  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:15:50.786449  247121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090310728s)
	I0921 22:15:51.185446  247121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.104502698s)
	I0921 22:15:51.185485  247121 addons.go:383] Verifying addon metrics-server=true in "old-k8s-version-20220921220722-10174"
	I0921 22:15:51.591577  247121 pod_ready.go:102] pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:51.793342  247121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.192307416s)
	I0921 22:15:51.795115  247121 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:15:48.531527  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:51.031482  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:52.532857  251080 node_ready.go:38] duration metric: took 4m0.009753586s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:15:52.535705  251080 out.go:177] 
	W0921 22:15:52.537201  251080 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:15:52.537217  251080 out.go:239] * 
	W0921 22:15:52.537962  251080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:15:52.539605  251080 out.go:177] 
	I0921 22:15:51.796853  247121 addons.go:414] enableAddons completed in 2.409520824s
	I0921 22:15:54.083167  247121 pod_ready.go:102] pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:56.582916  247121 pod_ready.go:102] pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:58.584402  247121 pod_ready.go:92] pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace has status "Ready":"True"
	I0921 22:15:58.584436  247121 pod_ready.go:81] duration metric: took 9.067293296s waiting for pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace to be "Ready" ...
	I0921 22:15:58.584454  247121 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l8c26" in "kube-system" namespace to be "Ready" ...
	I0921 22:15:58.589269  247121 pod_ready.go:92] pod "kube-proxy-l8c26" in "kube-system" namespace has status "Ready":"True"
	I0921 22:15:58.589291  247121 pod_ready.go:81] duration metric: took 4.828761ms waiting for pod "kube-proxy-l8c26" in "kube-system" namespace to be "Ready" ...
	I0921 22:15:58.589300  247121 pod_ready.go:38] duration metric: took 9.076263342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:15:58.589320  247121 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:15:58.589363  247121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:15:58.602293  247121 api_server.go:71] duration metric: took 9.215008123s to wait for apiserver process to appear ...
	I0921 22:15:58.602321  247121 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:15:58.602334  247121 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:15:58.608712  247121 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0921 22:15:58.609902  247121 api_server.go:140] control plane version: v1.16.0
	I0921 22:15:58.609924  247121 api_server.go:130] duration metric: took 7.595623ms to wait for apiserver health ...
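	[editor's note] The healthz probe above is a plain HTTPS GET against the apiserver endpoint, treated as healthy once it returns 200 with body "ok". A minimal sketch of such a probe; InsecureSkipVerify here stands in for loading the cluster CA, which the real client does properly:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver healthz endpoint seen in the log above.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("not healthy yet:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}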
	I0921 22:15:58.609933  247121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:15:58.613709  247121 system_pods.go:59] 5 kube-system pods found
	I0921 22:15:58.613730  247121 system_pods.go:61] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:15:58.613741  247121 system_pods.go:61] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:15:58.613751  247121 system_pods.go:61] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:15:58.613769  247121 system_pods.go:61] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:15:58.613776  247121 system_pods.go:61] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:15:58.613785  247121 system_pods.go:74] duration metric: took 3.846457ms to wait for pod list to return data ...
	I0921 22:15:58.613791  247121 default_sa.go:34] waiting for default service account to be created ...
	I0921 22:15:58.678420  247121 default_sa.go:45] found service account: "default"
	I0921 22:15:58.678449  247121 default_sa.go:55] duration metric: took 64.649367ms for default service account to be created ...
	I0921 22:15:58.678461  247121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0921 22:15:58.682534  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:15:58.682568  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:15:58.682577  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:15:58.682585  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:15:58.682598  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:15:58.682605  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:15:58.682623  247121 retry.go:31] will retry after 227.257272ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:15:58.977556  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:15:58.977644  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:15:58.977661  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:15:58.977668  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:15:58.977680  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:15:58.977706  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:15:58.977734  247121 retry.go:31] will retry after 307.639038ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:15:59.290548  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:15:59.290587  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:15:59.290596  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:15:59.290603  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:15:59.290614  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:15:59.290631  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:15:59.290656  247121 retry.go:31] will retry after 348.248857ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:15:59.643573  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:15:59.643608  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:15:59.643617  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:15:59.643629  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:15:59.643645  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:15:59.643657  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:15:59.643677  247121 retry.go:31] will retry after 437.769008ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:00.086306  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:00.086345  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:00.086355  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:00.086367  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:00.086378  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:00.086388  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:00.086407  247121 retry.go:31] will retry after 665.003868ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:00.756535  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:00.756571  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:00.756581  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:00.756588  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:00.756596  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:00.756605  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:00.756628  247121 retry.go:31] will retry after 655.575962ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:01.416694  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:01.416728  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:01.416737  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:01.416745  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:01.416756  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:01.416764  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:01.416785  247121 retry.go:31] will retry after 812.142789ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:02.256014  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:02.256109  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:02.256132  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:02.256146  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:02.256171  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:02.256187  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:02.256207  247121 retry.go:31] will retry after 1.109165795s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:03.369475  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:03.369504  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:03.369511  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:03.369516  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:03.369523  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:03.369528  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:03.369541  247121 retry.go:31] will retry after 1.54277181s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:04.916795  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:04.916830  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:04.916839  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:04.916845  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:04.916857  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:04.916866  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:04.916883  247121 retry.go:31] will retry after 2.200241603s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:07.121387  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:07.121416  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:07.121422  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:07.121427  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:07.121433  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:07.121438  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:07.121451  247121 retry.go:31] will retry after 2.087459713s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:09.214157  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:09.214195  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:09.214201  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:09.214206  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:09.214224  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:09.214230  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:09.214244  247121 retry.go:31] will retry after 2.615099305s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:11.833901  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:11.833933  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:11.833939  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:11.833945  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:11.833952  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:11.833958  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:11.833972  247121 retry.go:31] will retry after 4.097406471s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:15.934944  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:15.934971  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:15.934977  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:15.934981  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:15.934989  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:15.934995  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:15.935009  247121 retry.go:31] will retry after 3.880319712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:19.819176  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:19.819201  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:19.819208  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:19.819213  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:19.819220  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:19.819233  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:19.819246  247121 retry.go:31] will retry after 6.722686426s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:26.546193  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:26.546219  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:26.546231  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:26.546235  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:26.546242  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:26.546249  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:26.546262  247121 retry.go:31] will retry after 7.804314206s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:34.355827  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:34.355853  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:34.355859  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:34.355863  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:34.355870  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:34.355875  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:34.355888  247121 retry.go:31] will retry after 8.98756758s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:43.348534  247121 system_pods.go:86] 5 kube-system pods found
	I0921 22:16:43.348565  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:43.348570  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:43.348575  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:43.348585  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:43.348593  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:43.348617  247121 retry.go:31] will retry after 8.483786333s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:16:51.837717  247121 system_pods.go:86] 7 kube-system pods found
	I0921 22:16:51.837745  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:16:51.837753  247121 system_pods.go:89] "etcd-old-k8s-version-20220921220722-10174" [d5bea577-44c9-4390-b056-93bd503be364] Pending
	I0921 22:16:51.837757  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:16:51.837763  247121 system_pods.go:89] "kube-apiserver-old-k8s-version-20220921220722-10174" [4a550b4b-7441-488b-bb35-2ddebcff7112] Pending
	I0921 22:16:51.837767  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:16:51.837774  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:16:51.837780  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:16:51.837793  247121 retry.go:31] will retry after 11.506963942s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0921 22:17:03.352770  247121 system_pods.go:86] 9 kube-system pods found
	I0921 22:17:03.352804  247121 system_pods.go:89] "coredns-5644d7b6d9-2mph9" [fe499b53-bc40-494d-8725-da0a3d32fa6d] Running
	I0921 22:17:03.352814  247121 system_pods.go:89] "etcd-old-k8s-version-20220921220722-10174" [d5bea577-44c9-4390-b056-93bd503be364] Running
	I0921 22:17:03.352823  247121 system_pods.go:89] "kindnet-dkk77" [c801adc6-e95c-41dc-a02f-6bc5fd757f39] Running
	I0921 22:17:03.352830  247121 system_pods.go:89] "kube-apiserver-old-k8s-version-20220921220722-10174" [4a550b4b-7441-488b-bb35-2ddebcff7112] Running
	I0921 22:17:03.352835  247121 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220921220722-10174" [a7edd356-0ce2-454c-8b56-896c383e2041] Running
	I0921 22:17:03.352840  247121 system_pods.go:89] "kube-proxy-l8c26" [a061610f-fbed-47b1-b399-43bb1247c1d7] Running
	I0921 22:17:03.352852  247121 system_pods.go:89] "kube-scheduler-old-k8s-version-20220921220722-10174" [e14b574f-cd47-48a3-8dd1-9cd5ea1c01de] Running
	I0921 22:17:03.352867  247121 system_pods.go:89] "metrics-server-7958775c-9r5h2" [d508d46d-acb3-4709-aef2-8e9a0fca9d14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0921 22:17:03.352880  247121 system_pods.go:89] "storage-provisioner" [e343d022-0a1b-45e3-9ae2-e0056269776b] Running
	I0921 22:17:03.352893  247121 system_pods.go:126] duration metric: took 1m4.674426519s to wait for k8s-apps to be running ...
	I0921 22:17:03.352907  247121 system_svc.go:44] waiting for kubelet service to be running ....
	I0921 22:17:03.352965  247121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:17:03.362632  247121 system_svc.go:56] duration metric: took 9.717678ms WaitForService to wait for kubelet.
	I0921 22:17:03.362654  247121 kubeadm.go:573] duration metric: took 1m13.975376547s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0921 22:17:03.362675  247121 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:17:03.365058  247121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:17:03.365078  247121 node_conditions.go:123] node cpu capacity is 8
	I0921 22:17:03.365089  247121 node_conditions.go:105] duration metric: took 2.409964ms to run NodePressure ...
	I0921 22:17:03.365099  247121 start.go:216] waiting for startup goroutines ...
	I0921 22:17:03.409136  247121 start.go:506] kubectl: 1.25.2, cluster: 1.16.0 (minor skew: 9)
	I0921 22:17:03.411009  247121 out.go:177] 
	W0921 22:17:03.412567  247121 out.go:239] ! /usr/local/bin/kubectl is version 1.25.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0921 22:17:03.413973  247121 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0921 22:17:03.415507  247121 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20220921220722-10174" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	35601481c1b92       d921cee849482       3 minutes ago       Exited              kindnet-cni               3                   01f72770699db
	2c132c99660ac       1c7d8c51823b5       12 minutes ago      Running             kube-proxy                0                   165337b73f95e
	4c0ef4a5b3254       97801f8394908       12 minutes ago      Running             kube-apiserver            0                   21fbb3d04e7ee
	6dc0cbf3dcda3       dbfceb93c69b6       12 minutes ago      Running             kube-controller-manager   0                   c9f16d90611ed
	07e2b5e608591       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   c5347b7c3fd3f
	50596ff38ce68       ca0ea1ee3cfd3       12 minutes ago      Running             kube-scheduler            0                   a1f596d0a7b61
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:04:48 UTC, end at Wed 2022-09-21 22:17:19 UTC. --
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.231676528Z" level=warning msg="cleaning up after shim disconnected" id=173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc namespace=k8s.io
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.231690505Z" level=info msg="cleaning up dead shim"
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.241912984Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:10:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2469 runtime=io.containerd.runc.v2\n"
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.883464470Z" level=info msg="RemoveContainer for \"ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5\""
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.888812715Z" level=info msg="RemoveContainer for \"ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5\" returns successfully"
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.209849333Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.224528742Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\""
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.225025241Z" level=info msg="StartContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\""
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.379970418Z" level=info msg="StartContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\" returns successfully"
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.828802894Z" level=info msg="shim disconnected" id=7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.828874359Z" level=warning msg="cleaning up after shim disconnected" id=7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88 namespace=k8s.io
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.828890498Z" level=info msg="cleaning up dead shim"
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.838864673Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:13:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2582 runtime=io.containerd.runc.v2\n"
	Sep 21 22:13:32 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:32.206042138Z" level=info msg="RemoveContainer for \"173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc\""
	Sep 21 22:13:32 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:32.212397684Z" level=info msg="RemoveContainer for \"173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc\" returns successfully"
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.210603544Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.224586549Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c\""
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.225181730Z" level=info msg="StartContainer for \"35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c\""
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.289797386Z" level=info msg="StartContainer for \"35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c\" returns successfully"
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.734742213Z" level=info msg="shim disconnected" id=35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.734809818Z" level=warning msg="cleaning up after shim disconnected" id=35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c namespace=k8s.io
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.734826450Z" level=info msg="cleaning up dead shim"
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.745551917Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:16:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2697 runtime=io.containerd.runc.v2\n"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:40.549102680Z" level=info msg="RemoveContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\""
	Sep 21 22:16:40 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:40.553777114Z" level=info msg="RemoveContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220921220439-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220921220439-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=embed-certs-20220921220439-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_05_03_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:04:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220921220439-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:17:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220921220439-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                39299add-007b-4517-8e1f-4d420ff2375f
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220921220439-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-mqr9d                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220921220439-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220921220439-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s7c85                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220921220439-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [07e2b5e608591e85913066c0986a5bd5ea1bf1e68a095ae9fea95c89af2a5837] <==
	* {"level":"info","ts":"2022-09-21T22:04:55.692Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:04:56.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220921220439-10174 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:04:56.583Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:04:56.583Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-09-21T22:09:10.633Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.024913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220921220439-10174\" ","response":"range_response_count:1 size:4776"}
	{"level":"warn","ts":"2022-09-21T22:09:10.633Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.059944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2022-09-21T22:09:10.633Z","caller":"traceutil/trace.go:171","msg":"trace[1448235816] range","detail":"{range_begin:/registry/minions/embed-certs-20220921220439-10174; range_end:; response_count:1; response_revision:435; }","duration":"111.159312ms","start":"2022-09-21T22:09:10.521Z","end":"2022-09-21T22:09:10.633Z","steps":["trace[1448235816] 'agreement among raft nodes before linearized reading'  (duration: 14.689355ms)","trace[1448235816] 'range keys from in-memory index tree'  (duration: 96.284607ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:09:10.633Z","caller":"traceutil/trace.go:171","msg":"trace[467199965] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:435; }","duration":"185.216087ms","start":"2022-09-21T22:09:10.447Z","end":"2022-09-21T22:09:10.633Z","steps":["trace[467199965] 'agreement among raft nodes before linearized reading'  (duration: 88.71972ms)","trace[467199965] 'range keys from in-memory index tree'  (duration: 96.312032ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:14:56.798Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":447}
	{"level":"info","ts":"2022-09-21T22:14:56.799Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":447,"took":"454.164µs"}
	
	* 
	* ==> kernel <==
	*  22:17:19 up 59 min,  0 users,  load average: 0.51, 1.78, 2.14
	Linux embed-certs-20220921220439-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [4c0ef4a5b32546e86e84aa28e2b53370eb4c462d47208c2f4053d8a94da4e5d0] <==
	* I0921 22:04:58.970037       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:04:58.970094       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:04:58.970121       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:04:58.970138       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:04:58.970794       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:04:58.975648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:04:58.981933       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:04:58.982835       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:04:59.641473       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:04:59.874406       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:04:59.877433       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:04:59.877457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:05:00.281431       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:05:00.320037       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:05:00.423949       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:05:00.430489       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0921 22:05:00.431476       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:05:00.435394       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:05:00.922548       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:05:02.028149       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:05:02.035424       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:05:02.043930       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:05:02.121398       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:05:14.537469       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0921 22:05:14.636887       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [6dc0cbf3dcda3fe8512f7ac309f5980e6a0d33dedf1aafdf4d79890ef21016e9] <==
	* I0921 22:05:13.874626       1 shared_informer.go:262] Caches are synced for persistent volume
	I0921 22:05:13.886424       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:05:13.917195       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0921 22:05:13.918384       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:05:13.918417       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:05:13.918466       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:05:13.930142       1 shared_informer.go:262] Caches are synced for taint
	I0921 22:05:13.930254       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0921 22:05:13.930304       1 taint_manager.go:209] "Sending events to api server"
	I0921 22:05:13.930264       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0921 22:05:13.930331       1 event.go:294] "Event occurred" object="embed-certs-20220921220439-10174" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller"
	W0921 22:05:13.930431       1 node_lifecycle_controller.go:1058] Missing timestamp for Node embed-certs-20220921220439-10174. Assuming now as a timestamp.
	I0921 22:05:13.930465       1 shared_informer.go:262] Caches are synced for daemon sets
	I0921 22:05:13.930488       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0921 22:05:13.983455       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:05:14.306350       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:05:14.321838       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:05:14.321866       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:05:14.539394       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:05:14.642432       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s7c85"
	I0921 22:05:14.643931       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mqr9d"
	I0921 22:05:14.793697       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-7xblt"
	I0921 22:05:14.797759       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-qn9gp"
	I0921 22:05:14.910189       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:05:14.921777       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-7xblt"
	
	* 
	* ==> kube-proxy [2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334] <==
	* I0921 22:05:15.218738       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0921 22:05:15.218815       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0921 22:05:15.218851       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:05:15.238164       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:05:15.238196       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:05:15.238214       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:05:15.238239       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:05:15.238267       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:05:15.238431       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:05:15.239025       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:05:15.239051       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:05:15.240122       1 config.go:317] "Starting service config controller"
	I0921 22:05:15.240165       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:05:15.240172       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:05:15.240184       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:05:15.240219       1 config.go:444] "Starting node config controller"
	I0921 22:05:15.240262       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:05:15.340574       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:05:15.340602       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:05:15.340643       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [50596ff38ce686be7705ce5777cdda9e90065d702d371e9f4371f62b19f49c34] <==
	* E0921 22:04:58.990827       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:04:58.990832       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:04:58.990783       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.990919       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:58.990919       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.990948       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:58.990679       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:04:58.990969       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:04:58.990956       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.991007       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:59.818146       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0921 22:04:59.818183       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:04:59.872480       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:04:59.872523       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:05:00.056545       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:05:00.056587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:05:00.076767       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:05:00.076819       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:05:00.086747       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:05:00.086784       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:05:00.106863       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:05:00.106895       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:05:00.153313       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:05:00.153358       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0921 22:05:02.886016       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:04:48 UTC, end at Wed 2022-09-21 22:17:19 UTC. --
	Sep 21 22:15:52 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:15:52.561330    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:57 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:15:57.562339    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:02 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:02.563636    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:07 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:07.565144    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:12 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:12.566190    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:17 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:17.567322    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:22 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:22.568995    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:27 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:27.570744    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:32 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:32.571705    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:37 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:37.573156    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:16:40.547821    1309 scope.go:115] "RemoveContainer" containerID="7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:16:40.548127    1309 scope.go:115] "RemoveContainer" containerID="35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:40.548414    1309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-mqr9d_kube-system(1dcc030c-e4fc-498d-a309-94f66d79cd24)\"" pod="kube-system/kindnet-mqr9d" podUID=1dcc030c-e4fc-498d-a309-94f66d79cd24
	Sep 21 22:16:42 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:42.574410    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:47 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:47.575882    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:52 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:52.576614    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:53 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:16:53.207362    1309 scope.go:115] "RemoveContainer" containerID="35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	Sep 21 22:16:53 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:53.207665    1309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-mqr9d_kube-system(1dcc030c-e4fc-498d-a309-94f66d79cd24)\"" pod="kube-system/kindnet-mqr9d" podUID=1dcc030c-e4fc-498d-a309-94f66d79cd24
	Sep 21 22:16:57 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:57.577345    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:02 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:02.578786    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:07 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:07.580160    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:08 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:17:08.208399    1309 scope.go:115] "RemoveContainer" containerID="35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	Sep 21 22:17:08 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:08.208814    1309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-mqr9d_kube-system(1dcc030c-e4fc-498d-a309-94f66d79cd24)\"" pod="kube-system/kindnet-mqr9d" podUID=1dcc030c-e4fc-498d-a309-94f66d79cd24
	Sep 21 22:17:12 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:12.581846    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:17 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:17.583299    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-qn9gp storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe pod busybox coredns-565d847f94-qn9gp storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220439-10174 describe pod busybox coredns-565d847f94-qn9gp storage-provisioner: exit status 1 (74.471197ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wf26r (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wf26r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m47s (x2 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-qn9gp" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220921220439-10174 describe pod busybox coredns-565d847f94-qn9gp storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220439-10174
helpers_test.go:235: (dbg) docker inspect embed-certs-20220921220439-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a",
	        "Created": "2022-09-21T22:04:47.451918435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229029,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:04:47.821915918Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hostname",
	        "HostsPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hosts",
	        "LogPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a-json.log",
	        "Name": "/embed-certs-20220921220439-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20220921220439-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220921220439-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220921220439-10174",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220921220439-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220921220439-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9eafa65cab570427f54e672c314a2de414b922ec2d5c452fa77eb94dc7c53c9e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9eafa65cab57",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220921220439-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0efc3a031048",
	                        "embed-certs-20220921220439-10174"
	                    ],
	                    "NetworkID": "e71aa30fd3ace87130e43e4abce1f2566d43d95c3b2e37ab1594e3c5a105c1bc",
	                    "EndpointID": "e12f2a7ae893a2d247b22ed045ec225e1db5924afdba9eb642a202517e80b83a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
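Note: rather than scanning the full inspect dump above, individual fields can be queried directly via docker inspect's -f/--format Go template (the same text/template syntax the status --format={{.Host}} call below uses). A minimal sketch against this run's container; the expected values are taken from the dump above:

	docker inspect -f '{{.State.Status}}' embed-certs-20220921220439-10174
	# -> running
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-20220921220439-10174
	# -> {"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"49398"}], ...}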
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs -n 25: (1.006480165s)
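Note: the -n 25 flag limits how far back each collected log source goes (minikube's --length flag), so the dump below is a 25-line tail per source. For a fuller post-mortem, the same command can write complete logs to a file via --file; a sketch, with a hypothetical output path:

	out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs --file=/tmp/embed-certs-postmortem.log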
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174                    |                                                 |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                 |         |         |                     |                     |
	|         | --enable-default-cni=true                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	| ssh     | -p cilium-20220921215524-10174                             | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                           |                                                 |         |         |                     |                     |
	| delete  | -p cilium-20220921215524-10174                             | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	| start   | -p bridge-20220921215523-10174                             | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                                              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                          |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                 |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                               |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	| ssh     | -p bridge-20220921215523-10174                             | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | kubernetes-upgrade-20220921215522-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | kubernetes-upgrade-20220921215522-10174                    |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174                    |                                                 |         |         |                     |                     |
	|         | pgrep -a kubelet                                           |                                                 |         |         |                     |                     |
	| delete  | -p bridge-20220921215523-10174                             | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:07 UTC |
	| start   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                 |         |         |                     |                     |
	| delete  | -p calico-20220921215524-10174                             | calico-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	| delete  | -p                                                         | disable-driver-mounts-20220921220831-10174      | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	|         | disable-driver-mounts-20220921220831-10174                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174                    |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:17:20
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:17:20.117394  263277 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:17:20.117573  263277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:20.117583  263277 out.go:309] Setting ErrFile to fd 2...
	I0921 22:17:20.117588  263277 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:20.117716  263277 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:17:20.118368  263277 out.go:303] Setting JSON to false
	I0921 22:17:20.119891  263277 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3591,"bootTime":1663795049,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:17:20.119964  263277 start.go:125] virtualization: kvm guest
	I0921 22:17:20.122564  263277 out.go:177] * [newest-cni-20220921221720-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:17:20.124773  263277 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:17:20.124727  263277 notify.go:214] Checking for updates...
	I0921 22:17:20.126509  263277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:17:20.128222  263277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:17:20.129773  263277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:17:20.131197  263277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:17:20.133406  263277 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:17:20.133555  263277 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:17:20.133696  263277 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:17:20.133757  263277 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:17:20.170498  263277 docker.go:137] docker version: linux-20.10.18
	I0921 22:17:20.170701  263277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:20.282412  263277 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:20.19415732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:17:20.282569  263277 docker.go:254] overlay module found
	I0921 22:17:20.285062  263277 out.go:177] * Using the docker driver based on user configuration
	I0921 22:17:20.286482  263277 start.go:284] selected driver: docker
	I0921 22:17:20.286513  263277 start.go:808] validating driver "docker" against <nil>
	I0921 22:17:20.286537  263277 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:17:20.287866  263277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:20.395824  263277 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:20.311226685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:17:20.395957  263277 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	W0921 22:17:20.395979  263277 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0921 22:17:20.396179  263277 start_flags.go:886] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0921 22:17:20.399338  263277 out.go:177] * Using Docker driver with root privileges
	I0921 22:17:20.400999  263277 cni.go:95] Creating CNI manager for ""
	I0921 22:17:20.401023  263277 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:17:20.401049  263277 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:17:20.401068  263277 start_flags.go:316] config:
	{Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:20.402740  263277 out.go:177] * Starting control plane node newest-cni-20220921221720-10174 in cluster newest-cni-20220921221720-10174
	I0921 22:17:20.404321  263277 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:17:20.405872  263277 out.go:177] * Pulling base image ...
	I0921 22:17:20.407304  263277 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:17:20.407357  263277 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:17:20.407375  263277 cache.go:57] Caching tarball of preloaded images
	I0921 22:17:20.407402  263277 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:17:20.407658  263277 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:17:20.407682  263277 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:17:20.407871  263277 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/config.json ...
	I0921 22:17:20.407908  263277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/config.json: {Name:mk2dda77722b71e5bcdfad60ba039f810b44cee9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:17:20.438943  263277 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:17:20.438978  263277 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:17:20.438990  263277 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:17:20.439040  263277 start.go:364] acquiring machines lock for newest-cni-20220921221720-10174: {Name:mk8430a9f0d2e7c62068c70c502e8bb9880fed55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:17:20.439211  263277 start.go:368] acquired machines lock for "newest-cni-20220921221720-10174" in 140.911µs
	I0921 22:17:20.439260  263277 start.go:93] Provisioning new machine with config: &{Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:17:20.439371  263277 start.go:125] createHost starting for "" (driver="docker")
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	35601481c1b92       d921cee849482       3 minutes ago       Exited              kindnet-cni               3                   01f72770699db
	2c132c99660ac       1c7d8c51823b5       12 minutes ago      Running             kube-proxy                0                   165337b73f95e
	4c0ef4a5b3254       97801f8394908       12 minutes ago      Running             kube-apiserver            0                   21fbb3d04e7ee
	6dc0cbf3dcda3       dbfceb93c69b6       12 minutes ago      Running             kube-controller-manager   0                   c9f16d90611ed
	07e2b5e608591       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   c5347b7c3fd3f
	50596ff38ce68       ca0ea1ee3cfd3       12 minutes ago      Running             kube-scheduler            0                   a1f596d0a7b61
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:04:48 UTC, end at Wed 2022-09-21 22:17:21 UTC. --
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.231676528Z" level=warning msg="cleaning up after shim disconnected" id=173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc namespace=k8s.io
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.231690505Z" level=info msg="cleaning up dead shim"
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.241912984Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:10:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2469 runtime=io.containerd.runc.v2\n"
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.883464470Z" level=info msg="RemoveContainer for \"ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5\""
	Sep 21 22:10:37 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:37.888812715Z" level=info msg="RemoveContainer for \"ca78ef37b396f92bf9a289e39419479866b6196ad11a7c737fadf39a7a1d54a5\" returns successfully"
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.209849333Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.224528742Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\""
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.225025241Z" level=info msg="StartContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\""
	Sep 21 22:10:51 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:10:51.379970418Z" level=info msg="StartContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\" returns successfully"
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.828802894Z" level=info msg="shim disconnected" id=7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.828874359Z" level=warning msg="cleaning up after shim disconnected" id=7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88 namespace=k8s.io
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.828890498Z" level=info msg="cleaning up dead shim"
	Sep 21 22:13:31 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:31.838864673Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:13:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2582 runtime=io.containerd.runc.v2\n"
	Sep 21 22:13:32 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:32.206042138Z" level=info msg="RemoveContainer for \"173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc\""
	Sep 21 22:13:32 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:32.212397684Z" level=info msg="RemoveContainer for \"173ec9492ce0e14e57dc9b776c742ca7ea6b204dcfa3220d44a992a1b4db2cdc\" returns successfully"
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.210603544Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.224586549Z" level=info msg="CreateContainer within sandbox \"01f72770699dba9b47fb1159d7817070ac2f2241cf613a4dde6d31da7bd3e606\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c\""
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.225181730Z" level=info msg="StartContainer for \"35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c\""
	Sep 21 22:13:59 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:13:59.289797386Z" level=info msg="StartContainer for \"35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c\" returns successfully"
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.734742213Z" level=info msg="shim disconnected" id=35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.734809818Z" level=warning msg="cleaning up after shim disconnected" id=35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c namespace=k8s.io
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.734826450Z" level=info msg="cleaning up dead shim"
	Sep 21 22:16:39 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:39.745551917Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:16:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2697 runtime=io.containerd.runc.v2\n"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:40.549102680Z" level=info msg="RemoveContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\""
	Sep 21 22:16:40 embed-certs-20220921220439-10174 containerd[514]: time="2022-09-21T22:16:40.553777114Z" level=info msg="RemoveContainer for \"7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220921220439-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220921220439-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=embed-certs-20220921220439-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_05_03_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:04:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220921220439-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:17:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:15:24 +0000   Wed, 21 Sep 2022 22:04:56 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220921220439-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                39299add-007b-4517-8e1f-4d420ff2375f
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220921220439-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-mqr9d                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220921220439-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220921220439-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s7c85                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220921220439-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller
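	The only failing condition in the table above is Ready=False with reason KubeletNotReady ("cni plugin not initialized"), and the matching node.kubernetes.io/not-ready:NoSchedule taint is what keeps workloads off the node. A manual triage sketch (hypothetical commands, not part of the captured run, assuming the same kubeconfig context):
	
	  kubectl --context embed-certs-20220921220439-10174 get nodes                                        # expect STATUS NotReady
	  kubectl --context embed-certs-20220921220439-10174 -n kube-system get pods -o wide | grep kindnet   # the CNI DaemonSet pod; expect CrashLoopBackOff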
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [07e2b5e608591e85913066c0986a5bd5ea1bf1e68a095ae9fea95c89af2a5837] <==
	* {"level":"info","ts":"2022-09-21T22:04:55.692Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:04:56.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:04:56.580Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220921220439-10174 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:04:56.581Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:04:56.583Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:04:56.583Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2022-09-21T22:09:10.633Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.024913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20220921220439-10174\" ","response":"range_response_count:1 size:4776"}
	{"level":"warn","ts":"2022-09-21T22:09:10.633Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.059944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2022-09-21T22:09:10.633Z","caller":"traceutil/trace.go:171","msg":"trace[1448235816] range","detail":"{range_begin:/registry/minions/embed-certs-20220921220439-10174; range_end:; response_count:1; response_revision:435; }","duration":"111.159312ms","start":"2022-09-21T22:09:10.521Z","end":"2022-09-21T22:09:10.633Z","steps":["trace[1448235816] 'agreement among raft nodes before linearized reading'  (duration: 14.689355ms)","trace[1448235816] 'range keys from in-memory index tree'  (duration: 96.284607ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:09:10.633Z","caller":"traceutil/trace.go:171","msg":"trace[467199965] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:435; }","duration":"185.216087ms","start":"2022-09-21T22:09:10.447Z","end":"2022-09-21T22:09:10.633Z","steps":["trace[467199965] 'agreement among raft nodes before linearized reading'  (duration: 88.71972ms)","trace[467199965] 'range keys from in-memory index tree'  (duration: 96.312032ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:14:56.798Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":447}
	{"level":"info","ts":"2022-09-21T22:14:56.799Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":447,"took":"454.164µs"}
	
	* 
	* ==> kernel <==
	*  22:17:21 up 59 min,  0 users,  load average: 0.51, 1.78, 2.14
	Linux embed-certs-20220921220439-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [4c0ef4a5b32546e86e84aa28e2b53370eb4c462d47208c2f4053d8a94da4e5d0] <==
	* I0921 22:04:58.970037       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:04:58.970094       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:04:58.970121       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:04:58.970138       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:04:58.970794       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:04:58.975648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:04:58.981933       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:04:58.982835       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:04:59.641473       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:04:59.874406       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:04:59.877433       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:04:59.877457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:05:00.281431       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:05:00.320037       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:05:00.423949       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:05:00.430489       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0921 22:05:00.431476       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:05:00.435394       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:05:00.922548       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:05:02.028149       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:05:02.035424       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:05:02.043930       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:05:02.121398       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:05:14.537469       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0921 22:05:14.636887       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [6dc0cbf3dcda3fe8512f7ac309f5980e6a0d33dedf1aafdf4d79890ef21016e9] <==
	* I0921 22:05:13.874626       1 shared_informer.go:262] Caches are synced for persistent volume
	I0921 22:05:13.886424       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:05:13.917195       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0921 22:05:13.918384       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:05:13.918417       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:05:13.918466       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:05:13.930142       1 shared_informer.go:262] Caches are synced for taint
	I0921 22:05:13.930254       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	I0921 22:05:13.930304       1 taint_manager.go:209] "Sending events to api server"
	I0921 22:05:13.930264       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
	I0921 22:05:13.930331       1 event.go:294] "Event occurred" object="embed-certs-20220921220439-10174" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller"
	W0921 22:05:13.930431       1 node_lifecycle_controller.go:1058] Missing timestamp for Node embed-certs-20220921220439-10174. Assuming now as a timestamp.
	I0921 22:05:13.930465       1 shared_informer.go:262] Caches are synced for daemon sets
	I0921 22:05:13.930488       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0921 22:05:13.983455       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:05:14.306350       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:05:14.321838       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:05:14.321866       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:05:14.539394       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:05:14.642432       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s7c85"
	I0921 22:05:14.643931       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mqr9d"
	I0921 22:05:14.793697       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-7xblt"
	I0921 22:05:14.797759       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-qn9gp"
	I0921 22:05:14.910189       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:05:14.921777       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-7xblt"
	
	* 
	* ==> kube-proxy [2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334] <==
	* I0921 22:05:15.218738       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0921 22:05:15.218815       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0921 22:05:15.218851       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:05:15.238164       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:05:15.238196       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:05:15.238214       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:05:15.238239       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:05:15.238267       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:05:15.238431       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:05:15.239025       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:05:15.239051       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:05:15.240122       1 config.go:317] "Starting service config controller"
	I0921 22:05:15.240165       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:05:15.240172       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:05:15.240184       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:05:15.240219       1 config.go:444] "Starting node config controller"
	I0921 22:05:15.240262       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:05:15.340574       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:05:15.340602       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:05:15.340643       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [50596ff38ce686be7705ce5777cdda9e90065d702d371e9f4371f62b19f49c34] <==
	* E0921 22:04:58.990827       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:04:58.990832       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:04:58.990783       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.990919       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:58.990919       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.990948       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:58.990679       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:04:58.990969       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:04:58.990956       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:04:58.991007       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:04:59.818146       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0921 22:04:59.818183       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:04:59.872480       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:04:59.872523       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:05:00.056545       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:05:00.056587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:05:00.076767       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:05:00.076819       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:05:00.086747       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:05:00.086784       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:05:00.106863       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:05:00.106895       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:05:00.153313       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:05:00.153358       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0921 22:05:02.886016       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:04:48 UTC, end at Wed 2022-09-21 22:17:21 UTC. --
	Sep 21 22:15:52 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:15:52.561330    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:57 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:15:57.562339    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:02 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:02.563636    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:07 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:07.565144    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:12 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:12.566190    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:17 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:17.567322    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:22 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:22.568995    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:27 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:27.570744    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:32 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:32.571705    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:37 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:37.573156    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:16:40.547821    1309 scope.go:115] "RemoveContainer" containerID="7671106b9c1f271ff1918ad3a09bd60d1267891c729e2eac7362ec4476185b88"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:16:40.548127    1309 scope.go:115] "RemoveContainer" containerID="35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	Sep 21 22:16:40 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:40.548414    1309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-mqr9d_kube-system(1dcc030c-e4fc-498d-a309-94f66d79cd24)\"" pod="kube-system/kindnet-mqr9d" podUID=1dcc030c-e4fc-498d-a309-94f66d79cd24
	Sep 21 22:16:42 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:42.574410    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:47 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:47.575882    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:52 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:52.576614    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:16:53 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:16:53.207362    1309 scope.go:115] "RemoveContainer" containerID="35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	Sep 21 22:16:53 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:53.207665    1309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-mqr9d_kube-system(1dcc030c-e4fc-498d-a309-94f66d79cd24)\"" pod="kube-system/kindnet-mqr9d" podUID=1dcc030c-e4fc-498d-a309-94f66d79cd24
	Sep 21 22:16:57 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:16:57.577345    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:02 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:02.578786    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:07 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:07.580160    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:08 embed-certs-20220921220439-10174 kubelet[1309]: I0921 22:17:08.208399    1309 scope.go:115] "RemoveContainer" containerID="35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	Sep 21 22:17:08 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:08.208814    1309 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-mqr9d_kube-system(1dcc030c-e4fc-498d-a309-94f66d79cd24)\"" pod="kube-system/kindnet-mqr9d" podUID=1dcc030c-e4fc-498d-a309-94f66d79cd24
	Sep 21 22:17:12 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:12.581846    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:17:17 embed-certs-20220921220439-10174 kubelet[1309]: E0921 22:17:17.583299    1309 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
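	The kubelet log above shows the failure loop directly: the kindnet-cni container keeps exiting, the kubelet backs off ("back-off 40s restarting failed container=kindnet-cni"), and without a CNI plugin the runtime network never becomes ready. A hedged next step, not executed in this run, would be to pull the crashing container's own logs (pod, container name, and container ID all taken from the entries above):
	
	  kubectl --context embed-certs-20220921220439-10174 -n kube-system logs kindnet-mqr9d -c kindnet-cni --previous
	  minikube -p embed-certs-20220921220439-10174 ssh -- sudo crictl logs 35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c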
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-qn9gp storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe pod busybox coredns-565d847f94-qn9gp storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220439-10174 describe pod busybox coredns-565d847f94-qn9gp storage-provisioner: exit status 1 (74.052849ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wf26r (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wf26r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m49s (x2 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
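	The event above follows from a taint/toleration mismatch: the pod only carries the default NoExecute tolerations for not-ready and unreachable, while the node's taint is node.kubernetes.io/not-ready:NoSchedule, which nothing here tolerates. A hypothetical check (not from the captured run) that would make the mismatch explicit:
	
	  kubectl --context embed-certs-20220921220439-10174 get node embed-certs-20220921220439-10174 -o jsonpath='{.spec.taints}'
	  kubectl --context embed-certs-20220921220439-10174 get pod busybox -o jsonpath='{.spec.tolerations}'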

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-qn9gp" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220921220439-10174 describe pod busybox coredns-565d847f94-qn9gp storage-provisioner: exit status 1
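The exit status 1 is most likely a namespacing artifact of the helper rather than evidence of missing pods: the pod list at helpers_test.go:270 was gathered with -A, but the describe ran without -n and so only found busybox in the default namespace, while coredns-565d847f94-qn9gp and storage-provisioner live in kube-system (hence the two NotFound errors in stderr). A hypothetical re-run with the namespaces separated:

  kubectl --context embed-certs-20220921220439-10174 describe pod busybox
  kubectl --context embed-certs-20220921220439-10174 -n kube-system describe pod coredns-565d847f94-qn9gp storage-provisioner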
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (484.94s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (276.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220921221118-10174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2
E0921 22:11:29.832199   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:11:38.505367   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 22:11:59.249958   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:11:59.255223   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:11:59.265478   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:11:59.285722   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:11:59.325979   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:11:59.407127   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:11:59.567541   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:11:59.888593   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:12:00.529035   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:12:01.809613   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:12:04.370727   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:12:09.491458   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:12:19.732398   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:12:23.527180   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 22:12:40.212629   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220921221118-10174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: exit status 80 (4m34.52025724s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:11:18.087901  251080 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:11:18.088024  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088036  251080 out.go:309] Setting ErrFile to fd 2...
	I0921 22:11:18.088042  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088174  251080 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:11:18.088746  251080 out.go:303] Setting JSON to false
	I0921 22:11:18.090393  251080 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3229,"bootTime":1663795049,"procs":653,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:11:18.090456  251080 start.go:125] virtualization: kvm guest
	I0921 22:11:18.093408  251080 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:11:18.094844  251080 notify.go:214] Checking for updates...
	I0921 22:11:18.096337  251080 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:11:18.097775  251080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:11:18.099219  251080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:18.100740  251080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:11:18.102389  251080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:11:18.104495  251080 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104651  251080 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104807  251080 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:11:18.104881  251080 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:11:18.138312  251080 docker.go:137] docker version: linux-20.10.18
	I0921 22:11:18.138426  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.232188  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.15986917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.232324  251080 docker.go:254] overlay module found
	I0921 22:11:18.234351  251080 out.go:177] * Using the docker driver based on user configuration
	I0921 22:11:18.235767  251080 start.go:284] selected driver: docker
	I0921 22:11:18.235790  251080 start.go:808] validating driver "docker" against <nil>
	I0921 22:11:18.235809  251080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:11:18.236643  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.330559  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.257769036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.330687  251080 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:11:18.330876  251080 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:11:18.332978  251080 out.go:177] * Using Docker driver with root privileges
	I0921 22:11:18.334347  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:18.334364  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:18.334381  251080 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:11:18.334405  251080 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:18.336049  251080 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.337335  251080 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:11:18.338625  251080 out.go:177] * Pulling base image ...
	I0921 22:11:18.339915  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:18.339961  251080 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:11:18.339976  251080 cache.go:57] Caching tarball of preloaded images
	I0921 22:11:18.340010  251080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:11:18.340234  251080 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:11:18.340259  251080 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:11:18.340397  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:18.340430  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json: {Name:mk68817f4bf887721f92775083cbcee80d5fb68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:18.367818  251080 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:11:18.367843  251080 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:11:18.367856  251080 cache.go:208] Successfully downloaded all kic artifacts
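	[sketch, not test output] The two cache checks above can be reproduced by hand; $MINIKUBE_HOME here matches the value printed at the top of this log:
	  $ ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/"
	  $ docker images --digests gcr.io/k8s-minikube/kicbase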
	I0921 22:11:18.367892  251080 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:18.368018  251080 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 101.344µs
	I0921 22:11:18.368055  251080 start.go:93] Provisioning new machine with config: &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:18.368157  251080 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:18.370528  251080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:18.370720  251080 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:18.370744  251080 client.go:168] LocalClient.Create starting
	I0921 22:11:18.370817  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:11:18.370845  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370861  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.370925  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:11:18.370944  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370953  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.371236  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:18.395515  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:18.395579  251080 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221118-10174] to gather additional debugging logs...
	I0921 22:11:18.395600  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174
	W0921 22:11:18.419547  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 returned with exit code 1
	I0921 22:11:18.419579  251080 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221118-10174]: docker network inspect default-k8s-different-port-20220921221118-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.419591  251080 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221118-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221118-10174
	
	** /stderr **
	I0921 22:11:18.419643  251080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:18.444258  251080 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:11:18.445274  251080 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:11:18.446196  251080 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-e71aa30fd3ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7a:b1:c8:c1}}
	I0921 22:11:18.447244  251080 network.go:241] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-4f93bc2f061a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ca:b2:42:ce}}
	I0921 22:11:18.448755  251080 network.go:290] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc00012cb10] misses:0}
	I0921 22:11:18.448802  251080 network.go:236] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0921 22:11:18.448826  251080 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0921 22:11:18.448915  251080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.510820  251080 network_create.go:99] docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 created
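	[sketch, not test output] The subnet scan above walks the existing bridge networks and reserves the first free /24; the equivalent manual queries:
	  $ docker network ls --filter driver=bridge --format '{{.Name}}' \
	      | xargs docker network inspect --format '{{.Name}} {{(index .IPAM.Config 0).Subnet}}'
	  $ docker network inspect default-k8s-different-port-20220921221118-10174 \
	      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'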
	I0921 22:11:18.510857  251080 kic.go:106] calculated static IP "192.168.85.2" for the "default-k8s-different-port-20220921221118-10174" container
	I0921 22:11:18.510919  251080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:11:18.536329  251080 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221118-10174 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:11:18.561443  251080 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.561538  251080 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220921221118-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --entrypoint /usr/bin/test -v default-k8s-different-port-20220921221118-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:11:19.127923  251080 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:19.127974  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:19.127994  251080 kic.go:179] Starting extracting preloaded images to volume ...
	I0921 22:11:19.128049  251080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir
	I0921 22:11:25.638147  251080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir: (6.510027893s)
	I0921 22:11:25.638182  251080 kic.go:188] duration metric: took 6.510186 seconds to extract preloaded images to volume
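	[sketch, not test output] The step above bind-mounts the preload tarball read-only and untars it into the machine volume; assuming the kicbase image from this run carries /bin/ls (it is Ubuntu-based), the result can be spot-checked with a throwaway container:
	  $ docker run --rm --entrypoint /bin/ls \
	      -v default-k8s-different-port-20220921221118-10174:/var \
	      gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c /var/lib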
	W0921 22:11:25.638326  251080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:11:25.638433  251080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:11:25.732843  251080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220921221118-10174 --name default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --network default-k8s-different-port-20220921221118-10174 --ip 192.168.85.2 --volume default-k8s-different-port-20220921221118-10174:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
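	[sketch, not test output] Each --publish above maps a node port to an ephemeral 127.0.0.1 port; the mappings can be resolved afterwards (in this run 22/tcp landed on 49418, as the sshutil lines below show):
	  $ docker port default-k8s-different-port-20220921221118-10174 22/tcp
	  $ docker port default-k8s-different-port-20220921221118-10174 8444/tcp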
	I0921 22:11:26.149451  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Running}}
	I0921 22:11:26.176098  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.201313  251080 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220921221118-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:11:26.261131  251080 oci.go:144] the created container "default-k8s-different-port-20220921221118-10174" has a running status.
	I0921 22:11:26.261169  251080 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa...
	I0921 22:11:26.437655  251080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:11:26.519667  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.549062  251080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:11:26.549102  251080 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220921221118-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:11:26.638792  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
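	[sketch, not test output] With the public key installed and chowned, the node accepts a normal SSH login using the generated key and the port mapping from this run:
	  $ ssh -o StrictHostKeyChecking=no -p 49418 \
	      -i "$MINIKUBE_HOME/machines/default-k8s-different-port-20220921221118-10174/id_rsa" \
	      docker@127.0.0.1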
	I0921 22:11:26.669847  251080 machine.go:88] provisioning docker machine ...
	I0921 22:11:26.669895  251080 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:26.669965  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.697039  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.697198  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.697217  251080 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:11:26.837603  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:11:26.837685  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.861819  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.861990  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.862027  251080 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:11:26.991431  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:11:26.991457  251080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:11:26.991475  251080 ubuntu.go:177] setting up certificates
	I0921 22:11:26.991485  251080 provision.go:83] configureAuth start
	I0921 22:11:26.991540  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.016270  251080 provision.go:138] copyHostCerts
	I0921 22:11:27.016322  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:11:27.016333  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:11:27.016404  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:11:27.016484  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:11:27.016495  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:11:27.016521  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:11:27.016571  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:11:27.016579  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:11:27.016602  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:11:27.016655  251080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:11:27.144451  251080 provision.go:172] copyRemoteCerts
	I0921 22:11:27.144512  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:11:27.144545  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.170137  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.266755  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:11:27.283950  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:11:27.300984  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:11:27.317480  251080 provision.go:86] duration metric: configureAuth took 325.986117ms
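	[sketch, not test output] configureAuth copied the CA plus a freshly signed server keypair into the node (the remote paths appear in the auth options above); what landed can be checked with:
	  $ docker exec default-k8s-different-port-20220921221118-10174 ls -l /etc/docker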
	I0921 22:11:27.317504  251080 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:11:27.317672  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:27.317689  251080 machine.go:91] provisioned docker machine in 647.81218ms
	I0921 22:11:27.317695  251080 client.go:171] LocalClient.Create took 8.9469458s
	I0921 22:11:27.317730  251080 start.go:167] duration metric: libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" took 8.947008533s
	I0921 22:11:27.317744  251080 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:27.317749  251080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:11:27.317788  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:11:27.317835  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.343342  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.435531  251080 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:11:27.438295  251080 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:11:27.438325  251080 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:11:27.438342  251080 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:11:27.438356  251080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:11:27.438371  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:11:27.438424  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:11:27.438521  251080 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:11:27.438630  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:11:27.445223  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:27.462414  251080 start.go:303] post-start completed in 144.661014ms
	I0921 22:11:27.462741  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.489387  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:27.489723  251080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:27.489786  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.514068  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.604197  251080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:27.608399  251080 start.go:128] duration metric: createHost completed in 9.240229808s
	I0921 22:11:27.608420  251080 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 9.240389159s
	I0921 22:11:27.608527  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634524  251080 ssh_runner.go:195] Run: systemctl --version
	I0921 22:11:27.634570  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634600  251080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:11:27.634691  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.660182  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.660873  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.749037  251080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:11:27.781889  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:11:27.791675  251080 docker.go:188] disabling docker service ...
	I0921 22:11:27.791773  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:11:27.809646  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:11:27.818739  251080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:11:27.897618  251080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:11:27.972484  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:11:27.982099  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:11:27.995156  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:11:28.003109  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.011124  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.018761  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:11:28.026807  251080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:11:28.034371  251080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:11:28.041097  251080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:11:28.122123  251080 ssh_runner.go:195] Run: sudo systemctl restart containerd
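	[sketch, not test output] The four sed edits above pin the pause image to registry.k8s.io/pause:3.8, disable restrict_oom_score_adj, keep the cgroupfs driver (SystemdCgroup = false), and point conf_dir at /etc/cni/net.mk before containerd restarts; to confirm the rewritten config:
	  $ docker exec default-k8s-different-port-20220921221118-10174 \
	      grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml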
	I0921 22:11:28.202854  251080 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:11:28.202928  251080 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:11:28.206617  251080 start.go:471] Will wait 60s for crictl version
	I0921 22:11:28.206695  251080 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:11:28.234745  251080 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:11:28.234815  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.263806  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.295305  251080 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:11:28.296662  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:28.320125  251080 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:11:28.323370  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:11:28.333100  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:28.333171  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.357788  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.357819  251080 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:11:28.357874  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.381874  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.381894  251080 cache_images.go:84] Images are preloaded, skipping loading
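	[sketch, not test output] The preload decision rests on the image list crictl reports; the same query by hand:
	  $ docker exec default-k8s-different-port-20220921221118-10174 sudo crictl images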
	I0921 22:11:28.381937  251080 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:11:28.408427  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:28.408456  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:28.408470  251080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:11:28.408481  251080 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:11:28.408605  251080 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:11:28.408684  251080 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
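	[sketch, not test output] The rendered kubeadm config above is uploaded to /var/tmp/minikube/kubeadm.yaml.new a few lines below; assuming the per-version binaries found on the node (see the ls check next), a dry-run validation of that file would look like:
	  $ docker exec default-k8s-different-port-20220921221118-10174 \
	      sudo /var/lib/minikube/binaries/v1.25.2/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run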
	I0921 22:11:28.408742  251080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:11:28.416363  251080 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:11:28.416431  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:11:28.423279  251080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:11:28.435844  251080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:11:28.448554  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
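	[sketch, not test output] The three scp-from-memory writes above install the kubelet drop-in, the kubelet unit, and the kubeadm config; the effective kubelet unit can be reviewed with:
	  $ docker exec default-k8s-different-port-20220921221118-10174 sudo systemctl cat kubelet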
	I0921 22:11:28.461624  251080 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:11:28.464712  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
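	[sketch, not test output] Both in-container host entries written by these one-liners should now resolve; expected contents per this run:
	  $ docker exec default-k8s-different-port-20220921221118-10174 grep minikube.internal /etc/hosts
	  192.168.85.1	host.minikube.internal
	  192.168.85.2	control-plane.minikube.internal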
	I0921 22:11:28.474003  251080 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:11:28.474126  251080 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:11:28.474185  251080 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:11:28.474246  251080 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:11:28.474266  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt with IP's: []
	I0921 22:11:28.567465  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt ...
	I0921 22:11:28.567491  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt: {Name:mk7f007abc18238b3f4d498b44323ac1c9a08dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567699  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key ...
	I0921 22:11:28.567732  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key: {Name:mk573406c706742430a89f6f7a356628c72d9a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567860  251080 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:11:28.567875  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:11:28.821872  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c ...
	I0921 22:11:28.821903  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c: {Name:mk6f9bf09d9a1574fea352675c579bd5b29a8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822090  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c ...
	I0921 22:11:28.822105  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c: {Name:mk02ae9ee31bcf5d402f8edd4ad6acaa82a351d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822189  251080 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt
	I0921 22:11:28.822247  251080 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key
	I0921 22:11:28.822293  251080 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:11:28.822308  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt with IP's: []
	I0921 22:11:28.922715  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt ...
	I0921 22:11:28.922741  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt: {Name:mkaf5c21db58b4a0b90357c15da03dae1abe71c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.922924  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key ...
	I0921 22:11:28.922938  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key: {Name:mk91f1c41e1900ed0eb542cfae77ba7b1ff8febd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.923107  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:11:28.923145  251080 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:11:28.923157  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:11:28.923183  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:11:28.923210  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:11:28.923233  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:11:28.923271  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:28.923840  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:11:28.942334  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:11:28.959138  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:11:28.975925  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:11:28.992601  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:11:29.009145  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:11:29.025974  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:11:29.043889  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:11:29.061111  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:11:29.078117  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:11:29.095326  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:11:29.112457  251080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:11:29.124660  251080 ssh_runner.go:195] Run: openssl version
	I0921 22:11:29.129304  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:11:29.136557  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139479  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139517  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.144088  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:11:29.151649  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:11:29.158634  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161640  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161682  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.166192  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:11:29.173529  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:11:29.181111  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184130  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184178  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.189023  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:11:29.196116  251080 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:29.196192  251080 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:11:29.196252  251080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:11:29.220112  251080 cri.go:87] found id: ""
	I0921 22:11:29.220180  251080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:11:29.227068  251080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:11:29.234009  251080 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:11:29.234055  251080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:11:29.240811  251080 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:11:29.240844  251080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:11:29.281554  251080 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:11:29.281632  251080 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:11:29.309304  251080 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:11:29.309370  251080 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:11:29.309403  251080 kubeadm.go:317] OS: Linux
	I0921 22:11:29.309445  251080 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:11:29.309491  251080 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:11:29.309562  251080 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:11:29.309615  251080 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:11:29.309671  251080 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:11:29.309719  251080 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:11:29.309757  251080 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:11:29.309798  251080 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:11:29.309837  251080 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:11:29.374829  251080 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:11:29.374943  251080 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:11:29.375043  251080 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:11:29.498766  251080 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:11:29.501998  251080 out.go:204]   - Generating certificates and keys ...
	I0921 22:11:29.502140  251080 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:11:29.502277  251080 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:11:29.597971  251080 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:11:29.835986  251080 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:11:30.089547  251080 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:11:30.169634  251080 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:11:30.225195  251080 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:11:30.225404  251080 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.334625  251080 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:11:30.334942  251080 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.454648  251080 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:11:30.667751  251080 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:11:30.842577  251080 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:11:30.842710  251080 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:11:30.909448  251080 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:11:31.056256  251080 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:11:31.120718  251080 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:11:31.191075  251080 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:11:31.202857  251080 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:11:31.203759  251080 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:11:31.203851  251080 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:11:31.284919  251080 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:11:31.287269  251080 out.go:204]   - Booting up control plane ...
	I0921 22:11:31.287395  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:11:31.288963  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:11:31.289889  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:11:31.290600  251080 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:11:31.292356  251080 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:11:37.294544  251080 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002117 seconds
	I0921 22:11:37.294700  251080 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:11:37.302999  251080 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:11:37.820634  251080 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:11:37.820909  251080 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:11:38.328855  251080 kubeadm.go:317] [bootstrap-token] Using token: f60jp5.opo6lrzt47sur902
	I0921 22:11:38.330272  251080 out.go:204]   - Configuring RBAC rules ...
	I0921 22:11:38.330460  251080 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:11:38.335703  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:11:38.340513  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:11:38.342637  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:11:38.344542  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:11:38.346406  251080 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:11:38.353833  251080 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:11:38.556116  251080 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:11:38.780075  251080 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:11:38.781317  251080 kubeadm.go:317] 
	I0921 22:11:38.781428  251080 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:11:38.781465  251080 kubeadm.go:317] 
	I0921 22:11:38.781595  251080 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:11:38.781624  251080 kubeadm.go:317] 
	I0921 22:11:38.781667  251080 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:11:38.781749  251080 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:11:38.781810  251080 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:11:38.781842  251080 kubeadm.go:317] 
	I0921 22:11:38.781971  251080 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:11:38.781987  251080 kubeadm.go:317] 
	I0921 22:11:38.782044  251080 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:11:38.782061  251080 kubeadm.go:317] 
	I0921 22:11:38.782142  251080 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:11:38.782239  251080 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:11:38.782336  251080 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:11:38.782349  251080 kubeadm.go:317] 
	I0921 22:11:38.782445  251080 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:11:38.782532  251080 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:11:38.782539  251080 kubeadm.go:317] 
	I0921 22:11:38.782640  251080 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.782760  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:11:38.782786  251080 kubeadm.go:317] 	--control-plane 
	I0921 22:11:38.782792  251080 kubeadm.go:317] 
	I0921 22:11:38.782886  251080 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:11:38.782893  251080 kubeadm.go:317] 
	I0921 22:11:38.782985  251080 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.783105  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:11:38.785995  251080 kubeadm.go:317] W0921 22:11:29.273642     735 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:11:38.786254  251080 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:11:38.786399  251080 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:11:38.786445  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:38.786461  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:38.788308  251080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:11:38.789713  251080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:11:38.793640  251080 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:11:38.793660  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:11:38.808403  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:11:39.596042  251080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:11:39.596097  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.596114  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_11_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.690430  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.696472  251080 ops.go:34] apiserver oom_adj: -16
	I0921 22:11:40.252956  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:40.753124  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.752958  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.252749  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.752944  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:43.252940  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:43.752934  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.752478  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.252903  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.752467  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.253256  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.752683  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.252892  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.752682  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:48.252790  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:48.752428  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.252346  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.753263  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.252919  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.752432  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.252537  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.752927  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.892900  251080 kubeadm.go:1067] duration metric: took 12.296861621s to wait for elevateKubeSystemPrivileges.
	I0921 22:11:51.892930  251080 kubeadm.go:398] StartCluster complete in 22.696819381s
	I0921 22:11:51.892946  251080 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:51.893033  251080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:51.894853  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:52.410836  251080 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:11:52.410900  251080 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:52.412753  251080 out.go:177] * Verifying Kubernetes components...
	I0921 22:11:52.410955  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:11:52.410996  251080 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:11:52.411177  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:52.414055  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:11:52.414125  251080 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414149  251080 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.414157  251080 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:11:52.414160  251080 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414177  251080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414210  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.414507  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.414719  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.453309  251080 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.453343  251080 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:11:52.453370  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.456214  251080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:11:52.453863  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.457793  251080 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:52.457817  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:11:52.457870  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.489924  251080 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.489952  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:11:52.490001  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.499827  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:52.521036  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:11:52.523074  251080 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:11:52.524139  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:52.694618  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.698974  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:53.100405  251080 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0921 22:11:53.285087  251080 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0921 22:11:53.286473  251080 addons.go:414] enableAddons completed in 875.500055ms
	I0921 22:11:54.531242  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:56.531286  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:59.030486  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:01.030832  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:03.031401  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:05.530847  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:08.030644  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:10.031510  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:12.531037  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:14.531388  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:17.030653  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:19.531491  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:22.030834  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:24.530794  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:27.030911  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:29.031263  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:31.531092  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:34.030989  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:36.530772  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:39.030837  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:41.030918  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:43.031395  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:45.530777  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:47.531054  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:49.531276  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:51.531467  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:54.030859  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:56.530994  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:59.030908  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:01.031504  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:03.531248  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:06.031222  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:08.530717  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:10.531407  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:13.030797  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:15.031195  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:17.531223  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:20.030902  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:22.531093  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:25.030827  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:27.031201  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:29.530660  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:32.030955  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:34.031037  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:36.031513  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:38.531161  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:41.030620  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:43.031380  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:45.530783  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:48.030568  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:50.031250  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:52.531321  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:55.031106  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:57.530922  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:00.030925  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:02.531359  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:05.030993  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:07.530797  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:09.530913  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:11.531450  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:14.030764  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:16.031283  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:18.031326  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:20.530765  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:23.031166  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:25.530744  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:27.531146  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:30.030823  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:32.031295  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:34.531158  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:37.031127  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:39.531189  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:42.031109  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:44.531144  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:47.031215  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:49.531212  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:52.031591  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:54.530764  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:56.531628  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:59.031443  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:01.530765  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:03.531199  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:06.031212  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:08.531535  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:11.030786  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:13.031313  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:15.531810  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:18.031388  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:20.531309  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:23.030497  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:25.031107  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:27.531379  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:30.030428  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:32.031245  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:34.531568  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:36.531614  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:39.031649  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:41.531271  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:44.031379  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:46.530779  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:48.531527  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:51.031482  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:52.532857  251080 node_ready.go:38] duration metric: took 4m0.009753586s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:15:52.535705  251080 out.go:177] 
	W0921 22:15:52.537201  251080 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:15:52.537217  251080 out.go:239] * 
	W0921 22:15:52.537962  251080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:15:52.539605  251080 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220921221118-10174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221118-10174
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220921221118-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112",
	        "Created": "2022-09-21T22:11:25.759772693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:11:26.140466749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hostname",
	        "HostsPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hosts",
	        "LogPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112-json.log",
	        "Name": "/default-k8s-different-port-20220921221118-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220921221118-10174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220921221118-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220921221118-10174",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220921221118-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220921221118-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2c16cce9402b8d39506117583a7fad80a94710d15dab294e1374d69074b6b894",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2c16cce9402b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220921221118-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "37728b19138a",
	                        "default-k8s-different-port-20220921221118-10174"
	                    ],
	                    "NetworkID": "e093ea2ee154cf6d0e5d3b4a191700b36287f8ecd49e1b54f684a8f299ea6b79",
	                    "EndpointID": "adb7408d4c9675e8a8c7221c5c44296bade020a1fef2417db2c78e1b8536881c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
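
The JSON above is the post-mortem "docker container inspect" dump for the kic container backing this profile. Individual fields can be pulled from the same source with a Go template rather than scanning the whole dump; a minimal sketch, reusing the 22/tcp port template that minikube itself runs later in this log (assuming the container still exists):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-different-port-20220921221118-10174
	# prints 49418 given the Ports block above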
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220921221118-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                                     |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --cni=cilium --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p auto-20220921215523-10174                      | auto-20220921215523-10174                       | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p auto-20220921215523-10174                      | auto-20220921215523-10174                       | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	| start   | -p calico-20220921215524-10174                    | calico-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC |                     |
	|         | --memory=2048                                     |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --cni=calico --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | kindnet-20220921215523-10174                    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174                      |                                                 |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kindnet-20220921215523-10174                    | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 21:59 UTC |
	|         | kindnet-20220921215523-10174                      |                                                 |         |         |                     |                     |
	| start   | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 21:59 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p cilium-20220921215524-10174                    | cilium-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	| start   | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | --memory=2048                                     |                                                 |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                                 |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                                 |         |         |                     |                     |
	|         | --cni=bridge --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	| ssh     | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:01 UTC | 21 Sep 22 22:01 UTC |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | kubernetes-upgrade-20220921215522-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | kubernetes-upgrade-20220921215522-10174           |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC |                     |
	|         | embed-certs-20220921220439-10174                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:04 UTC | 21 Sep 22 22:04 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                                 |         |         |                     |                     |
	| delete  | -p bridge-20220921215523-10174                    | bridge-20220921215523-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:07 UTC |
	| start   | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:07 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                 |         |         |                     |                     |
	| delete  | -p calico-20220921215524-10174                    | calico-20220921215524-10174                     | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	| delete  | -p                                                | disable-driver-mounts-20220921220831-10174      | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC | 21 Sep 22 22:08 UTC |
	|         | disable-driver-mounts-20220921220831-10174        |                                                 |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:08 UTC |                     |
	|         | no-preload-20220921220832-10174                   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC |                     |
	|         | old-k8s-version-20220921220722-10174              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                 |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174           |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                      |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
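
The final Audit row above is the FirstStart invocation whose failure this post-mortem covers; reassembled onto one line from the wrapped Args column, it reads:

	out/minikube-linux-amd64 start -p default-k8s-different-port-20220921221118-10174 \
	  --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.25.2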
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:11:18
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:11:18.087901  251080 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:11:18.088024  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088036  251080 out.go:309] Setting ErrFile to fd 2...
	I0921 22:11:18.088042  251080 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:11:18.088174  251080 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:11:18.088746  251080 out.go:303] Setting JSON to false
	I0921 22:11:18.090393  251080 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3229,"bootTime":1663795049,"procs":653,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:11:18.090456  251080 start.go:125] virtualization: kvm guest
	I0921 22:11:18.093408  251080 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:11:18.094844  251080 notify.go:214] Checking for updates...
	I0921 22:11:18.096337  251080 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:11:18.097775  251080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:11:18.099219  251080 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:18.100740  251080 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:11:18.102389  251080 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:11:18.104495  251080 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104651  251080 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:18.104807  251080 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:11:18.104881  251080 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:11:18.138312  251080 docker.go:137] docker version: linux-20.10.18
	I0921 22:11:18.138426  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.232188  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.15986917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.232324  251080 docker.go:254] overlay module found
	I0921 22:11:18.234351  251080 out.go:177] * Using the docker driver based on user configuration
	I0921 22:11:18.235767  251080 start.go:284] selected driver: docker
	I0921 22:11:18.235790  251080 start.go:808] validating driver "docker" against <nil>
	I0921 22:11:18.235809  251080 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:11:18.236643  251080 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:11:18.330559  251080 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:57 SystemTime:2022-09-21 22:11:18.257769036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:11:18.330687  251080 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 22:11:18.330876  251080 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:11:18.332978  251080 out.go:177] * Using Docker driver with root privileges
	I0921 22:11:18.334347  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:18.334364  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:18.334381  251080 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 22:11:18.334405  251080 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:18.336049  251080 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.337335  251080 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:11:18.338625  251080 out.go:177] * Pulling base image ...
	I0921 22:11:18.339915  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:18.339961  251080 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:11:18.339976  251080 cache.go:57] Caching tarball of preloaded images
	I0921 22:11:18.340010  251080 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:11:18.340234  251080 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:11:18.340259  251080 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:11:18.340397  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:18.340430  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json: {Name:mk68817f4bf887721f92775083cbcee80d5fb68a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:18.367818  251080 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:11:18.367843  251080 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:11:18.367856  251080 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:11:18.367892  251080 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:11:18.368018  251080 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 101.344µs
	I0921 22:11:18.368055  251080 start.go:93] Provisioning new machine with config: &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:18.368157  251080 start.go:125] createHost starting for "" (driver="docker")
	I0921 22:11:17.425811  247121 kubeadm.go:778] kubelet initialised
	I0921 22:11:17.425835  247121 kubeadm.go:779] duration metric: took 58.431682599s waiting for restarted kubelet to initialise ...
	I0921 22:11:17.425842  247121 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:11:17.430135  247121 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.434236  247121 pod_ready.go:92] pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.434255  247121 pod_ready.go:81] duration metric: took 4.0995ms waiting for pod "coredns-5644d7b6d9-ft4dg" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.434264  247121 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.438049  247121 pod_ready.go:92] pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.438070  247121 pod_ready.go:81] duration metric: took 3.799088ms waiting for pod "coredns-5644d7b6d9-mvb9z" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.438084  247121 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.441889  247121 pod_ready.go:92] pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.441906  247121 pod_ready.go:81] duration metric: took 3.813836ms waiting for pod "etcd-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.441918  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.445604  247121 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.445626  247121 pod_ready.go:81] duration metric: took 3.699251ms waiting for pod "kube-apiserver-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.445637  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.825354  247121 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:17.825379  247121 pod_ready.go:81] duration metric: took 379.733387ms waiting for pod "kube-controller-manager-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:17.825389  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fxg44" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.225967  247121 pod_ready.go:92] pod "kube-proxy-fxg44" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:18.225996  247121 pod_ready.go:81] duration metric: took 400.60033ms waiting for pod "kube-proxy-fxg44" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.226010  247121 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.625047  247121 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace has status "Ready":"True"
	I0921 22:11:18.625076  247121 pod_ready.go:81] duration metric: took 399.057463ms waiting for pod "kube-scheduler-old-k8s-version-20220921220722-10174" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.625094  247121 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace to be "Ready" ...
	I0921 22:11:18.837224  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:21.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:18.370528  251080 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0921 22:11:18.370720  251080 start.go:159] libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:18.370744  251080 client.go:168] LocalClient.Create starting
	I0921 22:11:18.370817  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem
	I0921 22:11:18.370845  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370861  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.370925  251080 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem
	I0921 22:11:18.370944  251080 main.go:134] libmachine: Decoding PEM data...
	I0921 22:11:18.370953  251080 main.go:134] libmachine: Parsing certificate...
	I0921 22:11:18.371236  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0921 22:11:18.395515  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0921 22:11:18.395579  251080 network_create.go:272] running [docker network inspect default-k8s-different-port-20220921221118-10174] to gather additional debugging logs...
	I0921 22:11:18.395600  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174
	W0921 22:11:18.419547  251080 cli_runner.go:211] docker network inspect default-k8s-different-port-20220921221118-10174 returned with exit code 1
	I0921 22:11:18.419579  251080 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220921221118-10174]: docker network inspect default-k8s-different-port-20220921221118-10174: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.419591  251080 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220921221118-10174]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220921221118-10174
	
	** /stderr **
	I0921 22:11:18.419643  251080 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:18.444258  251080 network.go:241] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7c23e57d062 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:39:9d:03}}
	I0921 22:11:18.445274  251080 network.go:241] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-bfa8cb3d5f9b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8c:39:36:0c}}
	I0921 22:11:18.446196  251080 network.go:241] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-e71aa30fd3ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:7a:b1:c8:c1}}
	I0921 22:11:18.447244  251080 network.go:241] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName:br-4f93bc2f061a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ca:b2:42:ce}}
	I0921 22:11:18.448755  251080 network.go:290] reserving subnet 192.168.85.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.85.0:0xc00012cb10] misses:0}
	I0921 22:11:18.448802  251080 network.go:236] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
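
The network.go lines above show the subnet scan: each taken /24 is skipped because its gateway is already bound to a host bridge interface (the br-... entries), and the first free candidate, 192.168.85.0/24, is reserved for 1m0s while the docker network is created. The candidates step the third octet by 9; a throwaway shell sketch reproducing just the enumeration observed here (illustrative only, not minikube's actual code):

	# candidate subnets in the order seen above: .49, .58, .67, .76, .85
	for third in $(seq 49 9 85); do
	  echo "192.168.${third}.0/24"
	done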
	I0921 22:11:18.448826  251080 network_create.go:115] attempt to create docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0921 22:11:18.448915  251080 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.510820  251080 network_create.go:99] docker network default-k8s-different-port-20220921221118-10174 192.168.85.0/24 created
	I0921 22:11:18.510857  251080 kic.go:106] calculated static IP "192.168.85.2" for the "default-k8s-different-port-20220921221118-10174" container
	I0921 22:11:18.510919  251080 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0921 22:11:18.536329  251080 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220921221118-10174 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true
	I0921 22:11:18.561443  251080 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:18.561538  251080 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220921221118-10174-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --entrypoint /usr/bin/test -v default-k8s-different-port-20220921221118-10174:/var gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -d /var/lib
	I0921 22:11:19.127923  251080 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220921221118-10174
	I0921 22:11:19.127974  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:19.127994  251080 kic.go:179] Starting extracting preloaded images to volume ...
	I0921 22:11:19.128049  251080 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir
	I0921 22:11:21.030814  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:23.030888  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:23.836773  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:25.837620  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:25.638147  251080 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220921221118-10174:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c -I lz4 -xf /preloaded.tar -C /extractDir: (6.510027893s)
	I0921 22:11:25.638182  251080 kic.go:188] duration metric: took 6.510186 seconds to extract preloaded images to volume
	W0921 22:11:25.638326  251080 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0921 22:11:25.638433  251080 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0921 22:11:25.732843  251080 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220921221118-10174 --name default-k8s-different-port-20220921221118-10174 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220921221118-10174 --network default-k8s-different-port-20220921221118-10174 --ip 192.168.85.2 --volume default-k8s-different-port-20220921221118-10174:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c
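
In the docker run line above, each --publish=127.0.0.1:: flag leaves the host port empty, so Docker binds an ephemeral loopback port for each exposed container port (22, 2376, 5000, 8444, 32443); those assignments are what appear under NetworkSettings.Ports in the inspect dump earlier in this report. They can also be read back per port, for example:

	docker port default-k8s-different-port-20220921221118-10174 22/tcp
	# 127.0.0.1:49418, matching the Ports entry in the inspect output above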
	I0921 22:11:26.149451  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Running}}
	I0921 22:11:26.176098  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.201313  251080 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220921221118-10174 stat /var/lib/dpkg/alternatives/iptables
	I0921 22:11:26.261131  251080 oci.go:144] the created container "default-k8s-different-port-20220921221118-10174" has a running status.
	I0921 22:11:26.261169  251080 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa...
	I0921 22:11:26.437655  251080 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0921 22:11:26.519667  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.549062  251080 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0921 22:11:26.549102  251080 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220921221118-10174 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0921 22:11:26.638792  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:26.669847  251080 machine.go:88] provisioning docker machine ...
	I0921 22:11:26.669895  251080 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:26.669965  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.697039  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.697198  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.697217  251080 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:11:26.837603  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:11:26.837685  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:26.861819  251080 main.go:134] libmachine: Using SSH client type: native
	I0921 22:11:26.861990  251080 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49418 <nil> <nil>}
	I0921 22:11:26.862027  251080 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:11:26.991431  251080 main.go:134] libmachine: SSH cmd err, output: <nil>: 
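	The "native" SSH client in the main.go:134 lines is golang.org/x/crypto/ssh. A minimal sketch of running one command over the forwarded port 49418 shown in the log; InsecureIgnoreHostKey and the hardcoded key path are simplifications for illustration, not what libmachine actually configures:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("id_rsa") // the machine key from the log
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local kic container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:49418", cfg) // port from the log
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		fmt.Printf("%s err=%v\n", out, err)
	}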
	I0921 22:11:26.991457  251080 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:11:26.991475  251080 ubuntu.go:177] setting up certificates
	I0921 22:11:26.991485  251080 provision.go:83] configureAuth start
	I0921 22:11:26.991540  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.016270  251080 provision.go:138] copyHostCerts
	I0921 22:11:27.016322  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:11:27.016333  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:11:27.016404  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:11:27.016484  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:11:27.016495  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:11:27.016521  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:11:27.016571  251080 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:11:27.016579  251080 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:11:27.016602  251080 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:11:27.016655  251080 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
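	The provision.go:112 step signs a server certificate with the profile CA, covering the san=[...] list in the log line above. A compact sketch of that signing with crypto/x509; the helper name, serial handling, and subject are simplified assumptions (the 26280h validity matches the CertExpiration value in the cluster config further down):

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220921221118-10174"}},
			// SANs copied from the san=[...] list in the log.
			DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20220921221118-10174"},
			IPAddresses: []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}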
	I0921 22:11:27.144451  251080 provision.go:172] copyRemoteCerts
	I0921 22:11:27.144512  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:11:27.144545  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.170137  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.266755  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:11:27.283950  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:11:27.300984  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:11:27.317480  251080 provision.go:86] duration metric: configureAuth took 325.986117ms
	I0921 22:11:27.317504  251080 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:11:27.317672  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:27.317689  251080 machine.go:91] provisioned docker machine in 647.81218ms
	I0921 22:11:27.317695  251080 client.go:171] LocalClient.Create took 8.9469458s
	I0921 22:11:27.317730  251080 start.go:167] duration metric: libmachine.API.Create for "default-k8s-different-port-20220921221118-10174" took 8.947008533s
	I0921 22:11:27.317744  251080 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:11:27.317749  251080 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:11:27.317788  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:11:27.317835  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.343342  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.435531  251080 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:11:27.438295  251080 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:11:27.438325  251080 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:11:27.438342  251080 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:11:27.438356  251080 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:11:27.438371  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:11:27.438424  251080 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:11:27.438521  251080 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:11:27.438630  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:11:27.445223  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:27.462414  251080 start.go:303] post-start completed in 144.661014ms
	I0921 22:11:27.462741  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.489387  251080 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:11:27.489723  251080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:11:27.489786  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.514068  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.604197  251080 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:11:27.608399  251080 start.go:128] duration metric: createHost completed in 9.240229808s
	I0921 22:11:27.608420  251080 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 9.240389159s
	I0921 22:11:27.608527  251080 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634524  251080 ssh_runner.go:195] Run: systemctl --version
	I0921 22:11:27.634570  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.634600  251080 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:11:27.634691  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:27.660182  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.660873  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:27.749037  251080 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:11:27.781889  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:11:27.791675  251080 docker.go:188] disabling docker service ...
	I0921 22:11:27.791773  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:11:27.809646  251080 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:11:27.818739  251080 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:11:27.897618  251080 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:11:27.972484  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:11:27.982099  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:11:27.995156  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:11:28.003109  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.011124  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:11:28.018761  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
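	The four sed one-liners above patch /etc/containerd/config.toml in place (sandbox image, OOM score handling, cgroup driver, CNI conf dir). The same edits expressed in Go, purely as a readable mirror of the logged commands:

	package cri

	import (
		"os"
		"regexp"
	)

	func patchContainerdConfig(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Each pair mirrors one logged `sed -e 's|^.*KEY = .*$|KEY = VALUE|'` edit.
		rules := []struct{ re, repl string }{
			{`(?m)^.*sandbox_image = .*$`, `sandbox_image = "registry.k8s.io/pause:3.8"`},
			{`(?m)^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`},
			{`(?m)^.*SystemdCgroup = .*$`, `SystemdCgroup = false`},
			{`(?m)^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`},
		}
		out := string(data)
		for _, r := range rules {
			out = regexp.MustCompile(r.re).ReplaceAllString(out, r.repl)
		}
		return os.WriteFile(path, []byte(out), 0o644)
	}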
	I0921 22:11:28.026807  251080 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:11:28.034371  251080 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:11:28.041097  251080 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:11:28.122123  251080 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:11:28.202854  251080 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:11:28.202928  251080 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:11:28.206617  251080 start.go:471] Will wait 60s for crictl version
	I0921 22:11:28.206695  251080 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:11:28.234745  251080 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:11:28.234815  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.263806  251080 ssh_runner.go:195] Run: containerd --version
	I0921 22:11:28.295305  251080 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:11:28.296662  251080 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:11:28.320125  251080 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:11:28.323370  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:11:28.333100  251080 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:11:28.333171  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.357788  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.357819  251080 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:11:28.357874  251080 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:11:28.381874  251080 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:11:28.381894  251080 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:11:28.381937  251080 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:11:28.408427  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:28.408456  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:28.408470  251080 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:11:28.408481  251080 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:11:28.408605  251080 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
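	The kubeadm.go:161 dump above is the rendered result of a parameterized manifest; minikube fills in the per-profile values (address, port, version, names) and writes the output to /var/tmp/minikube/kubeadm.yaml. A minimal text/template sketch of the same idea, reduced to two fields; the struct and field names here are illustrative, not minikube's:

	package main

	import (
		"os"
		"text/template"
	)

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Values taken from the rendered config above.
		err := t.Execute(os.Stdout, struct {
			AdvertiseAddress string
			BindPort         int
		}{"192.168.85.2", 8444})
		if err != nil {
			panic(err)
		}
	}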
	
	I0921 22:11:28.408684  251080 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0921 22:11:28.408742  251080 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:11:28.416363  251080 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:11:28.416431  251080 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:11:28.423279  251080 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:11:28.435844  251080 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:11:28.448554  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:11:28.461624  251080 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:11:28.464712  251080 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
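	The bash one-liner above makes the /etc/hosts update idempotent: strip any stale control-plane.minikube.internal line, append the current mapping, and copy the result back into place. The same logic in Go, without the sudo/tmp-file dance:

	package hosts

	import (
		"os"
		"strings"
	)

	func setHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			// Drop any existing "<ip>\t<host>" mapping, like the `grep -v` above.
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}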
	I0921 22:11:28.474003  251080 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:11:28.474126  251080 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:11:28.474185  251080 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:11:28.474246  251080 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:11:28.474266  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt with IP's: []
	I0921 22:11:28.567465  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt ...
	I0921 22:11:28.567491  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.crt: {Name:mk7f007abc18238b3f4d498b44323ac1c9a08dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567699  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key ...
	I0921 22:11:28.567732  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key: {Name:mk573406c706742430a89f6f7a356628c72d9a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.567860  251080 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:11:28.567875  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0921 22:11:28.821872  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c ...
	I0921 22:11:28.821903  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c: {Name:mk6f9bf09d9a1574fea352675c579bd5b29a8c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822090  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c ...
	I0921 22:11:28.822105  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c: {Name:mk02ae9ee31bcf5d402f8edd4ad6acaa82a351d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.822189  251080 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt
	I0921 22:11:28.822247  251080 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key
	I0921 22:11:28.822293  251080 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:11:28.822308  251080 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt with IP's: []
	I0921 22:11:28.922715  251080 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt ...
	I0921 22:11:28.922741  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt: {Name:mkaf5c21db58b4a0b90357c15da03dae1abe71c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.922924  251080 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key ...
	I0921 22:11:28.922938  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key: {Name:mk91f1c41e1900ed0eb542cfae77ba7b1ff8febd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:28.923107  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:11:28.923145  251080 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:11:28.923157  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:11:28.923183  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:11:28.923210  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:11:28.923233  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:11:28.923271  251080 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:11:28.923840  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:11:28.942334  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:11:28.959138  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:11:28.975925  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:11:28.992601  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:11:29.009145  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:11:29.025974  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:11:29.043889  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:11:29.061111  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:11:29.078117  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:11:29.095326  251080 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:11:29.112457  251080 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:11:29.124660  251080 ssh_runner.go:195] Run: openssl version
	I0921 22:11:29.129304  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:11:29.136557  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139479  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.139517  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:11:29.144088  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:11:29.151649  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:11:29.158634  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161640  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.161682  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:11:29.166192  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:11:29.173529  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:11:29.181111  251080 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184130  251080 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.184178  251080 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:11:29.189023  251080 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
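	The three `test -L || ln -fs` blocks above publish each CA under its OpenSSL subject-hash name (the b5213941.0, 51391683.0, and 3ec20f2e.0 names come from the preceding `openssl x509 -hash -noout` runs), so TLS clients that scan /etc/ssl/certs by hash can find them. A sketch of the idempotent-link half of that step; the hash itself is still computed by openssl in the log:

	package certs

	import "os"

	// ensureHashLink mirrors `test -L <link> || ln -fs <pem> <link>`:
	// create the hash-named symlink only if nothing is there yet.
	func ensureHashLink(pemPath, linkPath string) error {
		if _, err := os.Lstat(linkPath); err == nil {
			return nil // link (or file) already present
		}
		return os.Symlink(pemPath, linkPath)
	}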
	I0921 22:11:29.196116  251080 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:11:29.196192  251080 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:11:29.196252  251080 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:11:29.220112  251080 cri.go:87] found id: ""
	I0921 22:11:29.220180  251080 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:11:29.227068  251080 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:11:29.234009  251080 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:11:29.234055  251080 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:11:29.240811  251080 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:11:29.240844  251080 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:11:29.281554  251080 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:11:29.281632  251080 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:11:29.309304  251080 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:11:29.309370  251080 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:11:29.309403  251080 kubeadm.go:317] OS: Linux
	I0921 22:11:29.309445  251080 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:11:29.309491  251080 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:11:29.309562  251080 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:11:29.309615  251080 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:11:29.309671  251080 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:11:29.309719  251080 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:11:29.309757  251080 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:11:29.309798  251080 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:11:29.309837  251080 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:11:29.374829  251080 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:11:29.374943  251080 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:11:29.375043  251080 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:11:29.498766  251080 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:11:25.530784  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:28.030733  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:30.031206  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:29.501998  251080 out.go:204]   - Generating certificates and keys ...
	I0921 22:11:29.502140  251080 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:11:29.502277  251080 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:11:29.597971  251080 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0921 22:11:29.835986  251080 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I0921 22:11:30.089547  251080 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I0921 22:11:30.169634  251080 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I0921 22:11:30.225195  251080 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I0921 22:11:30.225404  251080 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.334625  251080 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I0921 22:11:30.334942  251080 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-different-port-20220921221118-10174 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0921 22:11:30.454648  251080 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0921 22:11:30.667751  251080 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I0921 22:11:30.842577  251080 kubeadm.go:317] [certs] Generating "sa" key and public key
	I0921 22:11:30.842710  251080 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:11:30.909448  251080 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:11:31.056256  251080 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:11:31.120718  251080 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:11:31.191075  251080 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:11:31.202857  251080 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:11:31.203759  251080 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:11:31.203851  251080 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:11:31.284919  251080 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:11:28.336967  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:30.337024  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:31.287269  251080 out.go:204]   - Booting up control plane ...
	I0921 22:11:31.287395  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:11:31.288963  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:11:31.289889  251080 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:11:31.290600  251080 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:11:31.292356  251080 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:11:32.530623  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:35.030218  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:32.337947  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:34.836321  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:36.837370  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:37.294544  251080 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002117 seconds
	I0921 22:11:37.294700  251080 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:11:37.302999  251080 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:11:37.820634  251080 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:11:37.820909  251080 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:11:38.328855  251080 kubeadm.go:317] [bootstrap-token] Using token: f60jp5.opo6lrzt47sur902
	I0921 22:11:38.330272  251080 out.go:204]   - Configuring RBAC rules ...
	I0921 22:11:38.330460  251080 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:11:38.335703  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:11:38.340513  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:11:38.342637  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:11:38.344542  251080 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:11:38.346406  251080 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:11:38.353833  251080 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:11:38.556116  251080 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:11:38.780075  251080 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:11:38.781317  251080 kubeadm.go:317] 
	I0921 22:11:38.781428  251080 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:11:38.781465  251080 kubeadm.go:317] 
	I0921 22:11:38.781595  251080 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:11:38.781624  251080 kubeadm.go:317] 
	I0921 22:11:38.781667  251080 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:11:38.781749  251080 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:11:38.781810  251080 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:11:38.781842  251080 kubeadm.go:317] 
	I0921 22:11:38.781971  251080 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:11:38.781987  251080 kubeadm.go:317] 
	I0921 22:11:38.782044  251080 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:11:38.782061  251080 kubeadm.go:317] 
	I0921 22:11:38.782142  251080 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:11:38.782239  251080 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:11:38.782336  251080 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:11:38.782349  251080 kubeadm.go:317] 
	I0921 22:11:38.782445  251080 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:11:38.782532  251080 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:11:38.782539  251080 kubeadm.go:317] 
	I0921 22:11:38.782640  251080 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.782760  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:11:38.782786  251080 kubeadm.go:317] 	--control-plane 
	I0921 22:11:38.782792  251080 kubeadm.go:317] 
	I0921 22:11:38.782886  251080 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:11:38.782893  251080 kubeadm.go:317] 
	I0921 22:11:38.782985  251080 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token f60jp5.opo6lrzt47sur902 \
	I0921 22:11:38.783105  251080 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:11:38.785995  251080 kubeadm.go:317] W0921 22:11:29.273642     735 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:11:38.786254  251080 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:11:38.786399  251080 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:11:38.786445  251080 cni.go:95] Creating CNI manager for ""
	I0921 22:11:38.786461  251080 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:11:38.788308  251080 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:11:37.030744  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:39.030828  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:39.337094  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:41.836184  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:38.789713  251080 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:11:38.793640  251080 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:11:38.793660  251080 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:11:38.808403  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
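	The CNI step writes the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl, exactly as the two Run: lines above show. A minimal sketch of the apply call; the paths are copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.2/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			panic(err)
		}
	}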
	I0921 22:11:39.596042  251080 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:11:39.596097  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.596114  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_11_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.690430  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:39.696472  251080 ops.go:34] apiserver oom_adj: -16
	I0921 22:11:40.252956  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:40.753124  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.752958  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.252749  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:42.752944  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:41.530878  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:44.030810  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:43.836287  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:45.837347  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:43.252940  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:43.752934  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.252898  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:44.752478  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.252903  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:45.752467  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.253256  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.752683  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.252892  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:47.752682  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:46.530737  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:49.030362  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:48.335973  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:50.336273  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:48.252790  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:48.752428  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.252346  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:49.753263  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.252919  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:50.752432  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.252537  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.752927  251080 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:11:51.892900  251080 kubeadm.go:1067] duration metric: took 12.296861621s to wait for elevateKubeSystemPrivileges.
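
The half-second cadence of the repeated `get sa default` runs above is a plain retry loop: the minikube-rbac clusterrolebinding issued at 22:11:39.596097 targets kube-system:default, a service account the controller-manager creates asynchronously, so minikube polls until it exists (12.3s here, per the elevateKubeSystemPrivileges metric). A hedged sketch of such a loop; the function name and timeout are illustrative:

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollDefaultSA shells out to kubectl every 500ms, matching the log
    // cadence, until the default ServiceAccount answers or time runs out.
    func pollDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil // SA exists; the RBAC binding can succeed now
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account never appeared: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
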
	I0921 22:11:51.892930  251080 kubeadm.go:398] StartCluster complete in 22.696819381s
	I0921 22:11:51.892946  251080 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:51.893033  251080 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:11:51.894853  251080 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:11:52.410836  251080 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:11:52.410900  251080 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:11:52.412753  251080 out.go:177] * Verifying Kubernetes components...
	I0921 22:11:52.410955  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:11:52.410996  251080 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0921 22:11:52.411177  251080 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:11:52.414055  251080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:11:52.414125  251080 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414149  251080 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.414157  251080 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:11:52.414160  251080 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414177  251080 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:11:52.414210  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.414507  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.414719  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.453309  251080 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:11:52.453343  251080 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:11:52.453370  251080 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:11:52.456214  251080 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:11:52.453863  251080 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:11:52.457793  251080 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:52.457817  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:11:52.457870  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.489924  251080 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.489952  251080 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:11:52.490001  251080 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:11:52.499827  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:11:52.521036  251080 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:11:52.523074  251080 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:11:52.524139  251080 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49418 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
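
The sshutil lines show how every one of these Run: commands reaches the node: docker publishes the container's 22/tcp on a random host port (49418 here) and minikube dials it with the per-profile id_rsa. A sketch of that dial with golang.org/x/crypto/ssh (key path shortened in the comment; the full path is in the log line above):

    package sketch

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dialNode opens the kind of SSH session the ssh_runner lines run over.
    func dialNode(keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath) // .minikube/machines/<profile>/id_rsa
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        return ssh.Dial("tcp", "127.0.0.1:49418", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local test container
        })
    }
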
	I0921 22:11:52.694618  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:11:52.698974  251080 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:11:53.100405  251080 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
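
The bash pipeline at 22:11:52.521036 is how that host record lands in CoreDNS: fetch the coredns ConfigMap, sed a hosts{} block in ahead of the `forward . /etc/resolv.conf` line, and `kubectl replace` the result. The same splice expressed in Go on the Corefile text (anchor spacing follows the sed pattern in the log; 192.168.85.1 is the docker network gateway for this profile):

    package sketch

    import "strings"

    // patchCorefile inserts the host.minikube.internal hosts block just
    // before the forward plugin, as the sed '/^ ... forward .../i' above does.
    func patchCorefile(corefile string) string {
        const anchor = "        forward . /etc/resolv.conf"
        hostsBlock := "        hosts {\n" +
            "           192.168.85.1 host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        return strings.Replace(corefile, anchor, hostsBlock+anchor, 1)
    }
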
	I0921 22:11:53.285087  251080 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0921 22:11:51.030761  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:53.030886  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:55.031051  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:52.337131  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:54.836927  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:53.286473  251080 addons.go:414] enableAddons completed in 875.500055ms
	I0921 22:11:54.531242  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:56.531286  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:11:57.531654  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:00.031155  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:11:57.336103  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:59.336841  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:01.836213  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:11:59.030486  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:01.030832  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:03.031401  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:02.530785  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:05.030896  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:03.836938  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:06.336349  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:05.530847  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:08.030644  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:07.031537  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:09.530730  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:08.837257  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:11.336377  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:10.031510  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:12.531037  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:11.531989  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:14.030729  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:13.837027  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:16.336212  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:14.531388  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:17.030653  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:16.031195  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:18.530817  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:18.837013  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:21.336931  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:19.531491  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:22.030834  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:20.531145  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:23.030753  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:25.033218  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:23.836124  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:25.836792  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:24.530794  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:27.030911  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:27.530979  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:30.030328  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:28.337008  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:30.836665  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:29.031263  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:31.531092  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:32.031104  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:34.530719  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:32.836819  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:35.336220  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:34.030989  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:36.530772  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:37.031009  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:39.530361  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:37.336820  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:39.837041  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:39.030837  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:41.030918  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:43.031395  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:41.530781  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:43.531407  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:42.336354  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:44.836264  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:46.836827  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:45.530777  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:47.531054  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:46.030030  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:48.030327  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:50.030839  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:49.336608  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:51.336789  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:49.531276  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:51.531467  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:52.031223  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:54.032232  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:53.836292  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:55.836687  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:54.030859  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:56.530994  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:12:56.531050  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:59.030372  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:12:57.836753  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:59.836812  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:12:59.030908  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:01.031504  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:01.031167  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:03.531055  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:02.336234  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:04.337102  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:06.836799  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:03.531248  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:06.031222  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:06.030340  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:08.030411  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:10.031005  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:08.836960  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:11.336661  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:08.530717  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:10.531407  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:13.030797  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:13.338612  242109 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:13:13.338639  242109 node_ready.go:38] duration metric: took 4m0.008551222s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:13:13.340854  242109 out.go:177] 
	W0921 22:13:13.342210  242109 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:13:13.342226  242109 out.go:239] * 
	W0921 22:13:13.342954  242109 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:13:13.344170  242109 out.go:177] 
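
That is the first of the three interleaved runs to die: no-preload (242109) exits with GUEST_START after its node sat NotReady for the full 4m0s condition wait inside the 6m start budget. The node_ready.go checks that have been printing "Ready":"False" throughout reduce to a single condition scan; a client-go sketch of the predicate (helper name is ours, not minikube's):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether the NodeReady condition is True; until it
    // flips, the poller above keeps logging "Ready":"False".
    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false // node has not reported a NodeReady condition yet
    }
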
	I0921 22:13:12.530866  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:15.030413  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:15.031195  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:17.531223  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:17.031193  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:19.530475  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:20.030902  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:22.531093  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:21.530536  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:23.531098  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:25.030827  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:27.031201  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:25.531210  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:28.030151  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:30.030808  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:29.530660  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:32.030955  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:32.530349  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:34.530517  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:34.031037  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:36.031513  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:37.031085  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:39.031177  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:38.531161  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:41.030620  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:43.031380  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:41.531099  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:44.030710  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:45.530783  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:48.030568  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:46.031382  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:48.531065  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:50.031250  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:52.531321  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:51.031119  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:53.529989  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:55.031106  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:57.530922  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:13:55.530775  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:13:58.030846  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:00.030925  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:02.531359  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:00.530982  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:03.030176  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:05.030812  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:05.030993  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:07.530797  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:07.530511  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:10.030457  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:09.530913  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:11.531450  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:12.031016  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:14.031227  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:14.030764  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:16.031283  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:18.031326  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:16.531495  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:19.030297  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:20.530765  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:23.031166  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:21.030808  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:23.530011  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:25.530744  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:27.531146  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:25.530854  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:28.030889  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:30.030823  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:32.031295  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:30.530876  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:33.030439  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:34.531158  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:37.031127  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:35.530538  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:37.530615  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:39.531497  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:39.531189  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:42.031109  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:42.030564  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:44.031074  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:44.531144  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:47.031215  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:46.531696  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:49.030430  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:49.531212  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:52.031591  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:51.030872  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:53.031128  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:54.530764  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:56.531628  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:14:55.531381  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:58.030638  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:14:59.031443  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:01.530765  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:00.530527  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:02.530909  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:05.031173  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:03.531199  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:06.031212  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:07.530981  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:09.531156  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:08.531535  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:11.030786  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:13.031313  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:12.031090  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:14.031428  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:15.531810  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:18.031388  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:16.530925  247121 pod_ready.go:102] pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:19.025682  247121 pod_ready.go:81] duration metric: took 4m0.400563713s waiting for pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace to be "Ready" ...
	E0921 22:15:19.025707  247121 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7958775c-n6rqq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:15:19.025727  247121 pod_ready.go:38] duration metric: took 4m1.599877119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:15:19.025750  247121 kubeadm.go:631] restartCluster took 5m11.841964255s
	W0921 22:15:19.026022  247121 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:15:19.026073  247121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:15:21.378094  247121 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.351995179s)
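
With its 4m0s pod budget spent, the old-k8s-version run (247121) abandons restartCluster and wipes the control plane with `kubeadm reset --force` before re-initializing from scratch. The pod_ready.go wait it just gave up on is the pod-side twin of the node predicate sketched earlier (again a hedged sketch, not minikube's literal code):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // podIsReady mirrors the metrics-server wait: a pod only counts once its
    // PodReady condition reports True, which never happened within 4m0s here.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
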
	I0921 22:15:21.378181  247121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:15:21.388550  247121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:15:21.396088  247121 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:15:21.396145  247121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:15:21.402886  247121 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
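
Exit status 2 from the ls probe is the expected result immediately after a reset: none of the four control-plane kubeconfigs survive, so there is no stale config to clean and minikube goes straight to kubeadm init. The probe in miniature (paths from the log; the boolean reading of the exit code is our gloss):

    package sketch

    import "os/exec"

    // haveStaleConfig returns true only when all four kubeconfigs exist;
    // ls exiting non-zero (status 2 above) means cleanup can be skipped.
    func haveStaleConfig() bool {
        return exec.Command("sudo", "ls", "-la",
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf").Run() == nil
    }
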
	I0921 22:15:21.402927  247121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:15:21.449138  247121 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I0921 22:15:21.449228  247121 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:15:21.477487  247121 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:15:21.477569  247121 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:15:21.477618  247121 kubeadm.go:317] OS: Linux
	I0921 22:15:21.477661  247121 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:15:21.477710  247121 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:15:21.477751  247121 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:15:21.477792  247121 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:15:21.477837  247121 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:15:21.477880  247121 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:15:21.549871  247121 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:15:21.550044  247121 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:15:21.550184  247121 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:15:21.684151  247121 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:15:21.686278  247121 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:15:21.693456  247121 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I0921 22:15:21.766666  247121 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:15:21.773015  247121 out.go:204]   - Generating certificates and keys ...
	I0921 22:15:21.773194  247121 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:15:21.773288  247121 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:15:21.773394  247121 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:15:21.773481  247121 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:15:21.773609  247121 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:15:21.773694  247121 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:15:21.773794  247121 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:15:21.773873  247121 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:15:21.773986  247121 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:15:21.774097  247121 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:15:21.774176  247121 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:15:21.774255  247121 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:15:22.127958  247121 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:15:22.390000  247121 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:15:22.602949  247121 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:15:22.872836  247121 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:15:22.874115  247121 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:15:20.531309  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:23.030497  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:22.876290  247121 out.go:204]   - Booting up control plane ...
	I0921 22:15:22.876378  247121 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:15:22.882073  247121 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:15:22.883893  247121 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:15:22.884961  247121 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:15:22.887367  247121 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
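
The 4m0s in [wait-control-plane] is kubeadm's budget for the kubelet to get the four static pods running; this run makes it in 8.5s (see the [apiclient] line below). Roughly, the wait amounts to a health-endpoint poll like the following sketch; the exact URL and transport are assumptions rather than kubeadm's literal code, and 8443 matches this cluster's control-plane endpoint:

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForAPIServer polls /healthz until it answers 200 or the budget is
    // spent, approximating kubeadm's wait-control-plane phase.
    func waitForAPIServer(timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(time.Second) {
            resp, err := client.Get("https://127.0.0.1:8443/healthz")
            if err != nil {
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        return fmt.Errorf("control plane not healthy after %s", timeout)
    }
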
	I0921 22:15:25.031107  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:27.531379  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:31.389627  247121 kubeadm.go:317] [apiclient] All control plane components are healthy after 8.502254 seconds
	I0921 22:15:31.389810  247121 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:15:31.400525  247121 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:15:31.915530  247121 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:15:31.915694  247121 kubeadm.go:317] [mark-control-plane] Marking the node old-k8s-version-20220921220722-10174 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0921 22:15:32.422920  247121 kubeadm.go:317] [bootstrap-token] Using token: 11qd7w.gdk44a66vaieoafi
	I0921 22:15:32.424367  247121 out.go:204]   - Configuring RBAC rules ...
	I0921 22:15:32.424501  247121 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:15:32.428711  247121 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:15:32.431641  247121 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:15:32.433601  247121 kubeadm.go:317] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:15:32.435558  247121 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:15:32.480798  247121 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:15:32.836938  247121 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:15:32.838227  247121 kubeadm.go:317] 
	I0921 22:15:32.838305  247121 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:15:32.838317  247121 kubeadm.go:317] 
	I0921 22:15:32.838409  247121 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:15:32.838419  247121 kubeadm.go:317] 
	I0921 22:15:32.838450  247121 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:15:32.838553  247121 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:15:32.838638  247121 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:15:32.838650  247121 kubeadm.go:317] 
	I0921 22:15:32.838727  247121 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:15:32.838800  247121 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:15:32.838907  247121 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:15:32.838918  247121 kubeadm.go:317] 
	I0921 22:15:32.839009  247121 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities 
	I0921 22:15:32.839087  247121 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:15:32.839100  247121 kubeadm.go:317] 
	I0921 22:15:32.839166  247121 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 11qd7w.gdk44a66vaieoafi \
	I0921 22:15:32.839252  247121 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:15:32.839298  247121 kubeadm.go:317]     --control-plane
	I0921 22:15:32.839310  247121 kubeadm.go:317] 
	I0921 22:15:32.839399  247121 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:15:32.839414  247121 kubeadm.go:317] 
	I0921 22:15:32.839511  247121 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 11qd7w.gdk44a66vaieoafi \
	I0921 22:15:32.839602  247121 kubeadm.go:317]     --discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
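
The --discovery-token-ca-cert-hash repeated in both join commands above is not a digest of the whole certificate: kubeadm hashes the CA's DER-encoded Subject Public Key Info. It can be recomputed from the CA cert on the node; /var/lib/minikube/certs is the certificateDir reported in the [certs] line earlier:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The hash covers RawSubjectPublicKeyInfo, not the certificate bytes.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
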
	I0921 22:15:32.841219  247121 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:15:32.841316  247121 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:15:32.841350  247121 cni.go:95] Creating CNI manager for ""
	I0921 22:15:32.841362  247121 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:15:32.843196  247121 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:15:30.030428  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:32.031245  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:32.844505  247121 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:15:32.848119  247121 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0921 22:15:32.848139  247121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:15:32.861406  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:15:33.081539  247121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:15:33.081628  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:33.081638  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=old-k8s-version-20220921220722-10174 minikube.k8s.io/updated_at=2022_09_21T22_15_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:33.202299  247121 ops.go:34] apiserver oom_adj: -16
	I0921 22:15:33.202435  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:33.787711  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:34.287913  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:34.787241  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:35.287775  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:34.531568  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:36.531614  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:35.786968  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:36.287553  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:36.787393  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:37.287388  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:37.787889  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:38.287160  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:38.787974  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:39.287230  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:39.787669  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:40.287712  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:39.031649  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:41.531271  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:40.787627  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:41.287056  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:41.787448  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:42.287147  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:42.788033  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:43.287821  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:43.787052  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:44.287156  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:44.787441  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:45.287162  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:44.031379  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:46.530779  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:45.787785  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:46.287589  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:46.787592  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:47.287976  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:47.787787  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:48.287665  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:48.787366  247121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:15:48.857183  247121 kubeadm.go:1067] duration metric: took 15.775622095s to wait for elevateKubeSystemPrivileges.
	I0921 22:15:48.857231  247121 kubeadm.go:398] StartCluster complete in 5m41.717623944s
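[editor's note] The long run of identical `kubectl get sa default` invocations above is a readiness poll: elevateKubeSystemPrivileges cannot create the cluster-admin binding until the "default" service account exists. A hedged Go sketch of that loop; the 2-minute deadline is an assumption, the ~500ms cadence matches the timestamps in the log:

// Hedged sketch: poll until `kubectl get sa default` succeeds, as the
// repeated runs above do.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.16.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	fmt.Println("timed out waiting for default service account")
}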
	I0921 22:15:48.857253  247121 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:15:48.857430  247121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:15:48.859451  247121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:15:49.387181  247121 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220921220722-10174" rescaled to 1
	I0921 22:15:49.387247  247121 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:15:49.389908  247121 out.go:177] * Verifying Kubernetes components...
	I0921 22:15:49.387285  247121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:15:49.387336  247121 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0921 22:15:49.387501  247121 config.go:180] Loaded profile config "old-k8s-version-20220921220722-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 22:15:49.391233  247121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:15:49.391285  247121 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391318  247121 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220921220722-10174"
	W0921 22:15:49.391331  247121 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:15:49.391335  247121 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391353  247121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391362  247121 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391374  247121 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391391  247121 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220921220722-10174"
	I0921 22:15:49.391402  247121 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220921220722-10174"
	W0921 22:15:49.391409  247121 addons.go:162] addon metrics-server should already be in state true
	I0921 22:15:49.391387  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	I0921 22:15:49.391459  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	W0921 22:15:49.391411  247121 addons.go:162] addon dashboard should already be in state true
	I0921 22:15:49.391517  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	I0921 22:15:49.391742  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.391925  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.391969  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.391979  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.429024  247121 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:15:49.430843  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:15:49.430870  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:15:49.430934  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.432886  247121 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:15:49.432341  247121 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220921220722-10174"
	W0921 22:15:49.435066  247121 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:15:49.435100  247121 host.go:66] Checking if "old-k8s-version-20220921220722-10174" exists ...
	I0921 22:15:49.435170  247121 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:15:49.435198  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:15:49.435253  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.435329  247121 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:15:49.435643  247121 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220921220722-10174 --format={{.State.Status}}
	I0921 22:15:49.438643  247121 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:15:49.440303  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:15:49.440329  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:15:49.440382  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.475535  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.476717  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.477035  247121 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:15:49.477179  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:15:49.477273  247121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220921220722-10174
	I0921 22:15:49.487555  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.510365  247121 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220921220722-10174" to be "Ready" ...
	I0921 22:15:49.510586  247121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:15:49.512994  247121 node_ready.go:49] node "old-k8s-version-20220921220722-10174" has status "Ready":"True"
	I0921 22:15:49.513015  247121 node_ready.go:38] duration metric: took 2.614704ms waiting for node "old-k8s-version-20220921220722-10174" to be "Ready" ...
	I0921 22:15:49.513026  247121 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:15:49.517116  247121 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace to be "Ready" ...
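[editor's note] pod_ready.go decides readiness from the pod's Ready condition. A hedged client-go sketch of that check, reduced to a single lookup; the pod name and kubeconfig path are taken from the log above, and running it anywhere but on the node itself would need a different kubeconfig path:

// Hedged sketch (client-go): one-shot Ready-condition check for the
// coredns pod being waited on above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-5644d7b6d9-2mph9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod Ready condition: %s\n", c.Status)
		}
	}
}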
	I0921 22:15:49.523194  247121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49413 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/old-k8s-version-20220921220722-10174/id_rsa Username:docker}
	I0921 22:15:49.696090  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:15:49.696275  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:15:49.696299  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:15:49.699280  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:15:49.699348  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:15:49.876210  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:15:49.876248  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:15:49.876623  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:15:49.876645  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:15:49.878450  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:15:49.898514  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:15:49.898544  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:15:49.981838  247121 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:15:49.981884  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:15:50.077044  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:15:50.077071  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:15:50.080895  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:15:50.190260  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:15:50.190292  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:15:50.276211  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:15:50.276296  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:15:50.376927  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:15:50.376971  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:15:50.397367  247121 start.go:810] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
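[editor's note] The sed pipeline at 22:15:49.510586 patches the CoreDNS Corefile so host.minikube.internal resolves to the gateway (192.168.76.1), by inserting a hosts block before the forward directive. A hedged Go sketch of that string transformation; the Corefile literal below is a simplified stand-in, not the cluster's actual ConfigMap:

// Hedged sketch: the edit the sed pipeline performs - insert a hosts
// block ahead of "forward . /etc/resolv.conf".
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`
	hosts := "        hosts {\n" +
		"           192.168.76.1 host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	// Insert the hosts block immediately before the forward directive.
	patched := strings.Replace(corefile,
		"        forward .", hosts+"        forward .", 1)
	fmt.Println(patched)
}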
	I0921 22:15:50.477570  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:15:50.477605  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:15:50.502035  247121 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:15:50.502070  247121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:15:50.600971  247121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:15:50.786449  247121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090310728s)
	I0921 22:15:51.185446  247121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.104502698s)
	I0921 22:15:51.185485  247121 addons.go:383] Verifying addon metrics-server=true in "old-k8s-version-20220921220722-10174"
	I0921 22:15:51.591577  247121 pod_ready.go:102] pod "coredns-5644d7b6d9-2mph9" in "kube-system" namespace has status "Ready":"False"
	I0921 22:15:51.793342  247121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.192307416s)
	I0921 22:15:51.795115  247121 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:15:48.531527  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:51.031482  251080 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:15:52.532857  251080 node_ready.go:38] duration metric: took 4m0.009753586s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:15:52.535705  251080 out.go:177] 
	W0921 22:15:52.537201  251080 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:15:52.537217  251080 out.go:239] * 
	W0921 22:15:52.537962  251080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:15:52.539605  251080 out.go:177] 
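[editor's note] The GUEST_START failure above is a node-Ready wait expiring: node_ready.go polled "default-k8s-different-port-20220921221118-10174" for the full budget and the node never left Ready=False. A hedged client-go sketch of such a wait; the 2s interval is an assumption, the 6m timeout matches the "Will wait 6m0s for node" line:

// Hedged sketch (client-go): poll a node's Ready condition with a
// timeout, as the failed wait above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name := "default-k8s-different-port-20220921221118-10174"
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient; keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
	fmt.Println("wait result:", err) // nil on Ready, wait.ErrWaitTimeout on expiry
}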
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	0cd296271116f       d921cee849482       About a minute ago   Running             kindnet-cni               1                   08adfd3bc0694
	2d8abb3e47710       d921cee849482       4 minutes ago        Exited              kindnet-cni               0                   08adfd3bc0694
	e1b3d54125fe2       1c7d8c51823b5       4 minutes ago        Running             kube-proxy                0                   c4181b02eb4c6
	2654f64b12dee       a8a176a5d5d69       4 minutes ago        Running             etcd                      0                   6d751c20960c7
	1bb50adc50b7e       97801f8394908       4 minutes ago        Running             kube-apiserver            0                   bb52b5028f7b8
	9e931b83ea689       dbfceb93c69b6       4 minutes ago        Running             kube-controller-manager   0                   e0dba18ec8f49
	8a3d458869b09       ca0ea1ee3cfd3       4 minutes ago        Running             kube-scheduler            0                   8550ad3b2adb2
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:11:26 UTC, end at Wed 2022-09-21 22:15:53 UTC. --
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.133746854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.133770486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.133962009Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4181b02eb4c640f17ce160397dd101c3fc8e6ea90c7be5ac4053ddf06a71a66 pid=1688 runtime=io.containerd.runc.v2
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.136312130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.136419708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.136435704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.136678182Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70 pid=1699 runtime=io.containerd.runc.v2
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.195266888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzphc,Uid:611dbd37-0771-41b2-b886-93f46d79f802,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4181b02eb4c640f17ce160397dd101c3fc8e6ea90c7be5ac4053ddf06a71a66\""
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.198084515Z" level=info msg="CreateContainer within sandbox \"c4181b02eb4c640f17ce160397dd101c3fc8e6ea90c7be5ac4053ddf06a71a66\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.212991691Z" level=info msg="CreateContainer within sandbox \"c4181b02eb4c640f17ce160397dd101c3fc8e6ea90c7be5ac4053ddf06a71a66\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608\""
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.213533580Z" level=info msg="StartContainer for \"e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608\""
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.285942380Z" level=info msg="StartContainer for \"e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608\" returns successfully"
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.393031012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-7wbpp,Uid:3f16ae0b-2f66-4f1e-b234-74570472a7f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\""
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.395648540Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.409173906Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a\""
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.409706222Z" level=info msg="StartContainer for \"2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a\""
	Sep 21 22:11:52 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:11:52.681239084Z" level=info msg="StartContainer for \"2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a\" returns successfully"
	Sep 21 22:14:33 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:33.223213615Z" level=info msg="shim disconnected" id=2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a
	Sep 21 22:14:33 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:33.223268787Z" level=warning msg="cleaning up after shim disconnected" id=2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a namespace=k8s.io
	Sep 21 22:14:33 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:33.223288856Z" level=info msg="cleaning up dead shim"
	Sep 21 22:14:33 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:33.233050547Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:14:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2118 runtime=io.containerd.runc.v2\n"
	Sep 21 22:14:34 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:34.070539111Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 21 22:14:34 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:34.086048547Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d\""
	Sep 21 22:14:34 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:34.086658632Z" level=info msg="StartContainer for \"0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d\""
	Sep 21 22:14:34 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:14:34.279838401Z" level=info msg="StartContainer for \"0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220921221118-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220921221118-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_11_39_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:11:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220921221118-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:15:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:11:48 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:11:48 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:11:48 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:11:48 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-different-port-20220921221118-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                15db467d-fd65-4163-8719-8617da0ee9c6
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220921221118-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-7wbpp                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220921221118-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220921221118-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-lzphc                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220921221118-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m22s)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s                  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s                  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s                  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                   node-controller  Node default-k8s-different-port-20220921221118-10174 event: Registered Node default-k8s-different-port-20220921221118-10174 in Controller
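[editor's note] The node's Ready=False reason above ("cni plugin not initialized") means the kubelet/containerd found no CNI network config, which is consistent with the kindnet-cni container exiting in the container-status table. A hedged Go sketch of the on-node check this implies; /etc/cni/net.d is the containerd/kubelet default config directory:

// Hedged sketch: list CNI configs under the default directory; an empty
// directory is what produces NetworkPluginNotReady.
package main

import (
	"fmt"
	"os"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println(dir, "is empty: runtime reports NetworkPluginNotReady")
		return
	}
	for _, e := range entries {
		fmt.Println("found CNI config:", e.Name())
	}
}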
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01] <==
	* {"level":"info","ts":"2022-09-21T22:11:32.801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2022-09-21T22:11:32.801Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2022-09-21T22:11:32.803Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-21T22:11:32.803Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-09-21T22:11:32.803Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-09-21T22:11:32.803Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:11:32.803Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-different-port-20220921221118-10174 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:11:32.994Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:11:32.994Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:15:53 up 58 min,  0 users,  load average: 0.67, 2.18, 2.29
	Linux default-k8s-different-port-20220921221118-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2] <==
	* I0921 22:11:35.575804       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:11:35.575972       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:11:35.576041       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:11:35.576424       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:11:35.576630       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:11:35.576649       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:11:35.592140       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:11:35.599285       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:11:36.242080       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:11:36.463042       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:11:36.466360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:11:36.466387       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:11:36.848899       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:11:36.897461       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:11:36.989327       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:11:36.994282       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0921 22:11:36.995247       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:11:36.999018       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:11:37.513301       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:11:38.547983       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:11:38.554911       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:11:38.562442       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:11:38.625519       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:11:51.719506       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0921 22:11:51.768557       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7] <==
	* I0921 22:11:50.915896       1 shared_informer.go:262] Caches are synced for ephemeral
	I0921 22:11:50.915920       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0921 22:11:50.915920       1 shared_informer.go:262] Caches are synced for HPA
	I0921 22:11:50.915965       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0921 22:11:50.915969       1 shared_informer.go:262] Caches are synced for daemon sets
	I0921 22:11:50.916011       1 shared_informer.go:262] Caches are synced for deployment
	I0921 22:11:50.916180       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:11:50.916336       1 shared_informer.go:262] Caches are synced for job
	I0921 22:11:50.916393       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:11:50.916669       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:11:50.919495       1 shared_informer.go:262] Caches are synced for cronjob
	I0921 22:11:50.920490       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:11:51.021946       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:11:51.044665       1 shared_informer.go:262] Caches are synced for attach detach
	I0921 22:11:51.072661       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:11:51.479876       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:11:51.515801       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:11:51.515827       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:11:51.721293       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:11:51.774170       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lzphc"
	I0921 22:11:51.776223       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7wbpp"
	I0921 22:11:51.913709       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:11:51.921133       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-hhmh6"
	I0921 22:11:51.926325       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-mrkjn"
	I0921 22:11:51.984101       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-hhmh6"
	
	* 
	* ==> kube-proxy [e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608] <==
	* I0921 22:11:52.320700       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0921 22:11:52.320772       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0921 22:11:52.320813       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:11:52.340612       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:11:52.340647       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:11:52.340656       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:11:52.340676       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:11:52.340703       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:11:52.340862       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:11:52.341069       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:11:52.341099       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:11:52.341713       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:11:52.341738       1 config.go:317] "Starting service config controller"
	I0921 22:11:52.341749       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:11:52.341752       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:11:52.341786       1 config.go:444] "Starting node config controller"
	I0921 22:11:52.341804       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:11:52.442259       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0921 22:11:52.442300       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:11:52.442317       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767] <==
	* E0921 22:11:35.585266       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:11:35.585270       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:11:35.585278       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:11:35.585375       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:11:35.585400       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:11:35.585404       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:35.585402       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:11:35.585415       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:11:35.585422       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:35.585423       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:11:35.585324       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:11:35.585438       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:11:36.465124       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.465238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.477413       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:11:36.477473       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0921 22:11:36.492805       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.492841       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.501968       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.502004       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.596856       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:11:36.596889       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:11:36.676392       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:11:36.676437       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0921 22:11:38.681580       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:11:26 UTC, end at Wed 2022-09-21 22:15:53 UTC. --
	Sep 21 22:13:53 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:13:53.935051    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:13:58 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:13:58.936152    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:03 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:03.937359    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:08 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:08.938674    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:13 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:13.940268    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:18 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:18.940990    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:23 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:23.942067    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:28 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:28.942758    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:33 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:33.944402    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:34 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:14:34.068098    1301 scope.go:115] "RemoveContainer" containerID="2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a"
	Sep 21 22:14:38 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:38.945975    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:43 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:43.947119    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:48 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:48.947861    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:53 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:53.949667    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:14:58 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:14:58.950828    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:03 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:03.952133    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:08 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:08.952967    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:13 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:13.954793    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:18 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:18.956181    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:23 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:23.957374    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:28 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:28.958759    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:33 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:33.960380    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:38 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:38.962139    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:43 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:43.963461    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:15:48 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:15:48.964385    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
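Note: the log above shows two distinct symptom groups. The kube-scheduler "forbidden" RBAC errors are transient noise while the control plane's role bindings are still being created, but the kubelet's repeated "cni plugin not initialized" error persists for the whole run and explains why the node never became Ready. A quick manual triage for the CNI symptom (illustrative commands, not part of the test harness; the profile name is taken from the log above) would be:

	# check whether containerd can see a CNI config inside the minikube node
	minikube ssh -p default-k8s-different-port-20220921221118-10174 -- sudo ls -l /etc/cni/net.d
	# containerd's runtime status includes the CNI load state
	minikube ssh -p default-k8s-different-port-20220921221118-10174 -- sudo crictl info | grep -i -A3 cni

An empty or unreadable /etc/cni/net.d keeps the runtime reporting NetworkReady=false, which in turn leaves the node tainted node.kubernetes.io/not-ready.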
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-mrkjn storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrkjn storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrkjn storage-provisioner: exit status 1 (72.614406ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-mrkjn" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrkjn storage-provisioner: exit status 1
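The describe failure itself is expected once the pods have already been replaced or garbage-collected; if a non-fatal probe were wanted here, kubectl's get supports one directly (an illustrative alternative, not what helpers_test.go runs):

	# exits 0 and prints nothing when the named pods no longer exist
	kubectl --context default-k8s-different-port-20220921221118-10174 get pod coredns-565d847f94-mrkjn storage-provisioner --ignore-not-found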
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (276.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (484.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f485eb78-cce4-4246-b0cd-c38f296666da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0921 22:13:21.173129   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:20.481759   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 22:14:27.009599   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:41.554480   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 22:14:41.650380   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:43.093597   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:58.905626   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:58.910921   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:58.921197   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:58.941481   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:58.981757   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:59.062752   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:59.223132   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:14:59.544217   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:15:00.184396   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:15:01.464843   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:15:04.025605   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:15:08.447634   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 22:15:09.146190   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:15:19.386861   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:15:39.868007   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/no-preload/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
start_stop_delete_test.go:196: TestStartStop/group/no-preload/serial/DeployApp: showing logs for failed pods as of 2022-09-21 22:21:15.761485873 +0000 UTC m=+3243.899937892
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context no-preload-20220921220832-10174 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8284w (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-8284w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  2m47s (x2 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context no-preload-20220921220832-10174 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
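Everything in the describe output points at scheduling rather than the busybox manifest: the pod is Pending with PodScheduled=False, and the only event is the untolerated node.kubernetes.io/not-ready taint on the single node. A minimal manual confirmation (assumed triage commands, not run by the test) would be:

	# the node should show NotReady and carry the not-ready taint
	kubectl --context no-preload-20220921220832-10174 get nodes
	kubectl --context no-preload-20220921220832-10174 describe node no-preload-20220921220832-10174 | grep -i -A2 taints

The taint is cleared automatically once the kubelet reports NetworkReady=true, so the root cause is the same uninitialized CNI plugin seen in the earlier failure.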
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220832-10174
helpers_test.go:235: (dbg) docker inspect no-preload-20220921220832-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e",
	        "Created": "2022-09-21T22:08:33.259074855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 242679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:08:33.608689229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hosts",
	        "LogPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e-json.log",
	        "Name": "/no-preload-20220921220832-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220921220832-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220921220832-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220921220832-10174",
	                "Source": "/var/lib/docker/volumes/no-preload-20220921220832-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220921220832-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "name.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f3429c3eccb420d534b5769179f5361b8b68686659e922bbb6d167cf1b0160",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/29f3429c3ecc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220921220832-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d6359e799a3f",
	                        "no-preload-20220921220832-10174"
	                    ],
	                    "NetworkID": "40cb175bb75cdb2ff8ee942229fbc7e22e0ed7651da5bae77cd3dd1e2f70c5e3",
	                    "EndpointID": "3a727e68b6a78ddeed89a7d40cdef360d206e4656d04dab25ad21e8976c86ff4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
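Most of the inspect dump above is irrelevant to this failure; the useful facts are that the container is running and that the API server port 8443/tcp is published on 127.0.0.1:49405. A narrower query using docker inspect's built-in Go-template formatting (illustrative, equivalent in effect to reading the JSON above) would be:

	# print container state plus each published port and its host binding
	docker inspect -f '{{.State.Status}} {{range $port, $bindings := .NetworkSettings.Ports}}{{$port}}->{{(index $bindings 0).HostPort}} {{end}}' no-preload-20220921220832-10174

Since the Docker layer is healthy, the post-mortem correctly moves on to the in-cluster logs below.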
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220921220832-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174                    |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:17:58
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:17:58.120581  270464 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:17:58.120688  270464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:58.120697  270464 out.go:309] Setting ErrFile to fd 2...
	I0921 22:17:58.120702  270464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:58.120845  270464 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:17:58.121392  270464 out.go:303] Setting JSON to false
	I0921 22:17:58.122972  270464 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3629,"bootTime":1663795049,"procs":558,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:17:58.123042  270464 start.go:125] virtualization: kvm guest
	I0921 22:17:58.125682  270464 out.go:177] * [newest-cni-20220921221720-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:17:58.127437  270464 notify.go:214] Checking for updates...
	I0921 22:17:58.127440  270464 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:17:58.128956  270464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:17:58.130431  270464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:17:58.131871  270464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:17:58.133348  270464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:17:58.135018  270464 config.go:180] Loaded profile config "newest-cni-20220921221720-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:17:58.135427  270464 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:17:58.168045  270464 docker.go:137] docker version: linux-20.10.18
	I0921 22:17:58.168151  270464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:58.266464  270464 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:58.189737336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:17:58.266580  270464 docker.go:254] overlay module found
	I0921 22:17:58.268829  270464 out.go:177] * Using the docker driver based on existing profile
	I0921 22:17:58.270222  270464 start.go:284] selected driver: docker
	I0921 22:17:58.270256  270464 start.go:808] validating driver "docker" against &{Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:58.270381  270464 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:17:58.271497  270464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:58.368662  270464 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:58.293908006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:17:58.368948  270464 start_flags.go:886] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0921 22:17:58.368972  270464 cni.go:95] Creating CNI manager for ""
	I0921 22:17:58.368978  270464 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:17:58.368988  270464 start_flags.go:316] config:
	{Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:58.371546  270464 out.go:177] * Starting control plane node newest-cni-20220921221720-10174 in cluster newest-cni-20220921221720-10174
	I0921 22:17:58.373254  270464 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:17:58.375006  270464 out.go:177] * Pulling base image ...
	I0921 22:17:58.376378  270464 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:17:58.376432  270464 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:17:58.376441  270464 cache.go:57] Caching tarball of preloaded images
	I0921 22:17:58.376496  270464 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:17:58.376670  270464 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:17:58.376685  270464 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:17:58.376794  270464 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/config.json ...
	I0921 22:17:58.405898  270464 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:17:58.405926  270464 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:17:58.405944  270464 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:17:58.405982  270464 start.go:364] acquiring machines lock for newest-cni-20220921221720-10174: {Name:mk8430a9f0d2e7c62068c70c502e8bb9880fed55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:17:58.406109  270464 start.go:368] acquired machines lock for "newest-cni-20220921221720-10174" in 88.174µs
	I0921 22:17:58.406138  270464 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:17:58.406145  270464 fix.go:55] fixHost starting: 
	I0921 22:17:58.406459  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:17:58.433186  270464 fix.go:103] recreateIfNeeded on newest-cni-20220921221720-10174: state=Stopped err=<nil>
	W0921 22:17:58.433223  270464 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:17:58.435481  270464 out.go:177] * Restarting existing docker container for "newest-cni-20220921221720-10174" ...
	I0921 22:17:59.476817  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:01.477612  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
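The lines tagged with pid 265259 are interleaved from a second test process sharing this stream, which is why the timestamps jump back and forth; that process is polling a coredns pod that stays Pending because the cluster's only node still carries the node.kubernetes.io/not-ready taint, normally removed once a CNI plugin is running. A read-only way to see both sides of that message (assumes kubectl is pointed at the affected cluster and coredns carries its default k8s-app=kube-dns label):

	# Show the taint named in the Unschedulable message
	kubectl get nodes -o jsonpath='{.items[0].spec.taints}'
	# List the coredns pods being polled
	kubectl -n kube-system get pods -l k8s-app=kube-dns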
	I0921 22:17:58.436724  270464 cli_runner.go:164] Run: docker start newest-cni-20220921221720-10174
	I0921 22:17:58.812251  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:17:58.838996  270464 kic.go:415] container "newest-cni-20220921221720-10174" state is running.
	I0921 22:17:58.839361  270464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220921221720-10174
	I0921 22:17:58.863864  270464 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/config.json ...
	I0921 22:17:58.864128  270464 machine.go:88] provisioning docker machine ...
	I0921 22:17:58.864176  270464 ubuntu.go:169] provisioning hostname "newest-cni-20220921221720-10174"
	I0921 22:17:58.864232  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:17:58.890333  270464 main.go:134] libmachine: Using SSH client type: native
	I0921 22:17:58.890539  270464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49433 <nil> <nil>}
	I0921 22:17:58.890562  270464 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220921221720-10174 && echo "newest-cni-20220921221720-10174" | sudo tee /etc/hostname
	I0921 22:17:58.891272  270464 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47948->127.0.0.1:49433: read: connection reset by peer
	I0921 22:18:02.032680  270464 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220921221720-10174
	
	I0921 22:18:02.032755  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.057717  270464 main.go:134] libmachine: Using SSH client type: native
	I0921 22:18:02.057857  270464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49433 <nil> <nil>}
	I0921 22:18:02.057877  270464 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220921221720-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220921221720-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220921221720-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:18:02.187457  270464 main.go:134] libmachine: SSH cmd err, output: <nil>: 
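The script above is the usual two-step hostname fix-up: the earlier "sudo hostname ... | sudo tee /etc/hostname" sets the live and persisted hostname, and this follow-up patches /etc/hosts so the new name resolves locally, rewriting the Debian-style 127.0.1.1 line when present and appending one otherwise. A minimal standalone sketch of the same pattern (NAME is a placeholder hostname, not from this run):

	NAME=my-node
	if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
	  if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi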
	I0921 22:18:02.187491  270464 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:18:02.187537  270464 ubuntu.go:177] setting up certificates
	I0921 22:18:02.187552  270464 provision.go:83] configureAuth start
	I0921 22:18:02.187614  270464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220921221720-10174
	I0921 22:18:02.212498  270464 provision.go:138] copyHostCerts
	I0921 22:18:02.212563  270464 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:18:02.212582  270464 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:18:02.212646  270464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:18:02.212744  270464 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:18:02.212757  270464 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:18:02.212785  270464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:18:02.212842  270464 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:18:02.212852  270464 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:18:02.212877  270464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:18:02.212920  270464 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220921221720-10174 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220921221720-10174]
	I0921 22:18:02.324560  270464 provision.go:172] copyRemoteCerts
	I0921 22:18:02.324626  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:18:02.324668  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.350508  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.443035  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:18:02.461197  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0921 22:18:02.478544  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:18:02.496270  270464 provision.go:86] duration metric: configureAuth took 308.708013ms
	I0921 22:18:02.496297  270464 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:18:02.496485  270464 config.go:180] Loaded profile config "newest-cni-20220921221720-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:18:02.496503  270464 machine.go:91] provisioned docker machine in 3.63235546s
	I0921 22:18:02.496513  270464 start.go:300] post-start starting for "newest-cni-20220921221720-10174" (driver="docker")
	I0921 22:18:02.496522  270464 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:18:02.496574  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:18:02.496622  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.522376  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.615677  270464 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:18:02.618648  270464 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:18:02.618675  270464 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:18:02.618684  270464 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:18:02.618689  270464 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:18:02.618700  270464 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:18:02.618752  270464 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:18:02.618845  270464 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:18:02.618970  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:18:02.626501  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:18:02.644714  270464 start.go:303] post-start completed in 148.187534ms
	I0921 22:18:02.644788  270464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:18:02.644827  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.670911  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.764893  270464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:18:02.768743  270464 fix.go:57] fixHost completed within 4.362593316s
	I0921 22:18:02.768771  270464 start.go:83] releasing machines lock for "newest-cni-20220921221720-10174", held for 4.362644221s
	I0921 22:18:02.768855  270464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220921221720-10174
	I0921 22:18:02.795436  270464 ssh_runner.go:195] Run: systemctl --version
	I0921 22:18:02.795492  270464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:18:02.795497  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.795554  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.823494  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.824005  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.944546  270464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:18:02.956593  270464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:18:02.966137  270464 docker.go:188] disabling docker service ...
	I0921 22:18:02.966187  270464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:18:02.976524  270464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:18:02.985862  270464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:18:03.072821  270464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:18:03.977007  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:06.477048  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:03.151088  270464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:18:03.160276  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:18:03.173665  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:18:03.181994  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:18:03.190576  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:18:03.198773  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
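Each of the four sed commands above rewrites one key of /etc/containerd/config.toml in place. Afterwards the touched lines read as below (enclosing TOML table headers omitted); SystemdCgroup = false matches the cgroupfs driver used throughout this run, and conf_dir points containerd's CNI loader at minikube's own directory:

	sandbox_image = "registry.k8s.io/pause:3.8"
	restrict_oom_score_adj = false
	SystemdCgroup = false
	conf_dir = "/etc/cni/net.mk"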
	I0921 22:18:03.207065  270464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:18:03.214108  270464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:18:03.220437  270464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:18:03.291585  270464 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:18:03.364927  270464 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:18:03.365006  270464 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:18:03.368520  270464 start.go:471] Will wait 60s for crictl version
	I0921 22:18:03.368580  270464 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:18:03.394076  270464 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:18:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:18:08.977213  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:10.977540  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:12.977891  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:14.441010  270464 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:18:14.464510  270464 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:18:14.464583  270464 ssh_runner.go:195] Run: containerd --version
	I0921 22:18:14.496598  270464 ssh_runner.go:195] Run: containerd --version
	I0921 22:18:14.529347  270464 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:18:14.530700  270464 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221720-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:18:14.554818  270464 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0921 22:18:14.558252  270464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
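The one-liner above is an idempotent /etc/hosts update: grep -v drops any stale host.minikube.internal entry, the fresh line is appended, and the result lands in a temp file that a single sudo cp copies back (redirecting straight into /etc/hosts would require the redirection itself to run as root). Generalized, with IP and NAME as placeholders:

	IP=192.168.76.1; NAME=host.minikube.internal
	{ grep -v "[[:space:]]$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$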
	I0921 22:18:14.569873  270464 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0921 22:18:14.979586  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:17.477557  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:14.571441  270464 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:18:14.571511  270464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:18:14.596174  270464 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:18:14.596199  270464 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:18:14.596243  270464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:18:14.620658  270464 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:18:14.620687  270464 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:18:14.620752  270464 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:18:14.646216  270464 cni.go:95] Creating CNI manager for ""
	I0921 22:18:14.646248  270464 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:18:14.646263  270464 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0921 22:18:14.646280  270464 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220921221720-10174 NodeName:newest-cni-20220921221720-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:18:14.646437  270464 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220921221720-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
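The generated config stacks four API objects separated by ---: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (control-plane endpoints and per-component extraArgs), KubeletConfiguration, and KubeProxyConfiguration. One hedged way to sanity-check such a file before using it, with the kubeadm binary and staging path that appear a few lines below; --dry-run makes no changes to the node:

	sudo /var/lib/minikube/binaries/v1.25.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run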
	I0921 22:18:14.646545  270464 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220921221720-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
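The unit fragment above is a systemd drop-in (installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf): the empty ExecStart= clears the command inherited from the base kubelet.service, since systemd rejects a second ExecStart for a simple service, and the next line supplies the full kubelet invocation with the containerd endpoints. After installing a drop-in:

	systemctl cat kubelet          # shows the base unit plus every drop-in, in order
	sudo systemctl daemon-reload   # needed before the edited unit takes effect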
	I0921 22:18:14.646603  270464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:18:14.654277  270464 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:18:14.654354  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:18:14.660964  270464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (561 bytes)
	I0921 22:18:14.673432  270464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:18:14.685836  270464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2196 bytes)
	I0921 22:18:14.698638  270464 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:18:14.701537  270464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:18:14.710830  270464 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174 for IP: 192.168.76.2
	I0921 22:18:14.710932  270464 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:18:14.710994  270464 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:18:14.711080  270464 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/client.key
	I0921 22:18:14.711147  270464 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/apiserver.key.31bdca25
	I0921 22:18:14.711222  270464 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/proxy-client.key
	I0921 22:18:14.711359  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:18:14.711402  270464 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:18:14.711421  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:18:14.711455  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:18:14.711490  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:18:14.711523  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:18:14.711582  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:18:14.712338  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:18:14.729559  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0921 22:18:14.746255  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:18:14.763711  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:18:14.780973  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:18:14.797778  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:18:14.815611  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:18:14.833381  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:18:14.851012  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:18:14.868746  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:18:14.886081  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:18:14.902622  270464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:18:14.915138  270464 ssh_runner.go:195] Run: openssl version
	I0921 22:18:14.919952  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:18:14.927571  270464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:18:14.930778  270464 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:18:14.930831  270464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:18:14.935861  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:18:14.942694  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:18:14.950509  270464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:18:14.953862  270464 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:18:14.953908  270464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:18:14.958697  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:18:14.966302  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:18:14.974033  270464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:18:14.977905  270464 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:18:14.977966  270464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:18:14.983670  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
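The three certificate blocks above all follow OpenSSL's lookup-by-hash convention: place the PEM under /usr/share/ca-certificates, compute its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so anything scanning the default CA path can find it (b5213941 is the hash this log links to minikubeCA.pem). Spelled out for one certificate:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"    # .0 = first cert with this subject hash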
	I0921 22:18:14.991185  270464 kubeadm.go:396] StartCluster: {Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:18:14.991309  270464 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:18:14.991360  270464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:18:15.018146  270464 cri.go:87] found id: "3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08"
	I0921 22:18:15.018180  270464 cri.go:87] found id: "4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec"
	I0921 22:18:15.018190  270464 cri.go:87] found id: "b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477"
	I0921 22:18:15.018203  270464 cri.go:87] found id: "6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f"
	I0921 22:18:15.018211  270464 cri.go:87] found id: "8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6"
	I0921 22:18:15.018220  270464 cri.go:87] found id: "5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624"
	I0921 22:18:15.018233  270464 cri.go:87] found id: ""
	I0921 22:18:15.018274  270464 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:18:15.031385  270464 cri.go:114] JSON = null
	W0921 22:18:15.031441  270464 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
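
The warning at kubeadm.go:403 comes from cross-checking two views of the runtime: crictl ps -a --quiet reported six kube-system containers, while runc list -f json returned a literal null, so there was nothing to unpause. A rough Go reconstruction of that consistency check (paths and label taken from the log; a sketch, not minikube's code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// kube-system container IDs as crictl sees them, one per line.
    	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(psOut))

    	// runc's view of the same state root; a literal "null" unmarshals
    	// into a nil slice, which is exactly the mismatch logged above.
    	listOut, err := exec.Command("sudo", "runc", "--root",
    		"/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var states []struct {
    		ID     string `json:"id"`
    		Status string `json:"status"`
    	}
    	if err := json.Unmarshal(listOut, &states); err != nil {
    		panic(err)
    	}
    	if len(states) != len(ids) {
    		fmt.Printf("list returned %d containers, but ps returned %d\n",
    			len(states), len(ids))
    	}
    }
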
	I0921 22:18:15.031513  270464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:18:15.038800  270464 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:18:15.038845  270464 kubeadm.go:627] restartCluster start
	I0921 22:18:15.038892  270464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:18:15.045772  270464 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.046513  270464 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220921221720-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:18:15.046982  270464 kubeconfig.go:127] "newest-cni-20220921221720-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:18:15.047754  270464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
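
Before reusing the profile, the kubeconfig is verified to contain the cluster's context; here it did not, so the file was rewritten under a lock. A crude sketch of the presence check, assuming a plain textual scan (a faithful version would decode the YAML contexts list):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // contextPresent reports whether the context name appears anywhere in the
    // kubeconfig file. Crude on purpose; real code would parse the YAML.
    func contextPresent(kubeconfig, name string) (bool, error) {
    	b, err := os.ReadFile(kubeconfig)
    	if err != nil {
    		return false, err
    	}
    	return strings.Contains(string(b), name), nil
    }

    func main() {
    	ok, err := contextPresent(os.Getenv("KUBECONFIG"), // or a literal path
    		"newest-cni-20220921221720-10174")
    	if err != nil {
    		panic(err)
    	}
    	if !ok {
    		fmt.Println("context is missing - will repair!")
    	}
    }
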
	I0921 22:18:15.049153  270464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:18:15.056226  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.056274  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.064290  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.264705  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.264804  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.273425  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.464713  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.464795  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.473459  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.664713  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.664815  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.673518  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.864904  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.864999  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.873682  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.064894  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.064973  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.073765  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.264891  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.264958  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.273621  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.464853  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.464930  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.473725  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.664886  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.664965  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.673397  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.864465  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.864572  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.873141  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.064395  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.064496  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.073006  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.265339  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.265419  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.274056  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.465346  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.465413  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.473854  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.665165  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.665252  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.673859  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.865160  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.865248  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.873894  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.065385  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:18.065491  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:18.073903  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.073926  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:18.073970  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:18.082276  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
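
Each "Checking apiserver status" above shells out to pgrep roughly every 200 ms; once the retries are exhausted, the kubeadm.go:602 line below concludes the apiserver is down and a reconfigure is needed. A deadline-bounded version of that poll (the 3-second budget is illustrative, chosen to match the span of the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(3 * time.Second) // illustrative timeout
    	for time.Now().Before(deadline) {
    		// pgrep exits non-zero when no process matches, which
    		// surfaces here as a non-nil error from Output().
    		out, err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Printf("apiserver pid: %s", out)
    			return
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the condition")
    }
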
	I0921 22:18:18.082307  270464 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0921 22:18:18.082315  270464 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:18:18.082329  270464 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:18:18.082383  270464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:18:18.107225  270464 cri.go:87] found id: "3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08"
	I0921 22:18:18.107253  270464 cri.go:87] found id: "4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec"
	I0921 22:18:18.107263  270464 cri.go:87] found id: "b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477"
	I0921 22:18:18.107274  270464 cri.go:87] found id: "6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f"
	I0921 22:18:18.107284  270464 cri.go:87] found id: "8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6"
	I0921 22:18:18.107300  270464 cri.go:87] found id: "5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624"
	I0921 22:18:18.107312  270464 cri.go:87] found id: ""
	I0921 22:18:18.107331  270464 cri.go:232] Stopping containers: [3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08 4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477 6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f 8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6 5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624]
	I0921 22:18:18.107386  270464 ssh_runner.go:195] Run: which crictl
	I0921 22:18:18.110370  270464 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08 4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477 6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f 8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6 5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624
	I0921 22:18:19.977758  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:21.977804  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:18.135605  270464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:18:18.145590  270464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:18:18.152746  270464 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep 21 22:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 21 22:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 21 22:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:17 /etc/kubernetes/scheduler.conf
	
	I0921 22:18:18.152791  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0921 22:18:18.159414  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0921 22:18:18.165868  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0921 22:18:18.172296  270464 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.172351  270464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:18:18.178817  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0921 22:18:18.185460  270464 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.185502  270464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
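
The grep probes above decide which of the four /etc/kubernetes/*.conf files are stale: any file that no longer references https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it, which is what happened to controller-manager.conf and scheduler.conf here. The same sweep as a sketch (root is required for the removals, as with the logged sudo rm -f):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		b, err := os.ReadFile(f)
    		if err != nil {
    			continue // missing file: nothing to clean up
    		}
    		if !strings.Contains(string(b), endpoint) {
    			fmt.Println("removing stale", f)
    			_ = os.Remove(f)
    		}
    	}
    }
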
	I0921 22:18:18.191903  270464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:18:18.198347  270464 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:18:18.198366  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:18.243873  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:18.875476  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:19.008429  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:19.075900  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
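
Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml, with PATH pointed at the cached v1.25.2 binaries. A sketch that runs the same sequence and stops at the first failure:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Phase sequence taken verbatim from the log above.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		// sudo env PATH=<cached binaries>:$PATH kubeadm init phase ...
    		full := append([]string{"env",
    			"PATH=/var/lib/minikube/binaries/v1.25.2:" + os.Getenv("PATH"),
    			"kubeadm"}, args...)
    		cmd := exec.Command("sudo", full...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Println("phase failed:", p, err)
    			return
    		}
    	}
    }
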
	I0921 22:18:19.188641  270464 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:18:19.188725  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:19.698462  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:20.198832  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:20.276477  270464 api_server.go:71] duration metric: took 1.087835442s to wait for apiserver process to appear ...
	I0921 22:18:20.276512  270464 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:18:20.276525  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:20.276919  270464 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0921 22:18:20.777646  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:23.481306  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:18:23.481335  270464 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:18:23.777732  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:23.783406  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:18:23.783445  270464 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:18:24.277560  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:24.282590  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:18:24.282619  270464 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:18:24.777960  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:24.783842  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0921 22:18:24.791207  270464 api_server.go:140] control plane version: v1.25.2
	I0921 22:18:24.791246  270464 api_server.go:130] duration metric: took 4.514726109s to wait for apiserver health ...
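
The healthz probe treats anything but 200 as "not healthy yet": the initial 403 is the anonymous user being rejected, and the 500s enumerate poststarthooks (rbac/bootstrap-roles, the system priority classes) that have not finished. A minimal unauthenticated probe loop; skipping TLS verification is an assumption made here because the apiserver's serving certificate is self-signed, and a real client would pin the cluster CA instead:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: anonymous probe, so skip cert verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 20; i++ {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // "ok", as in the 200 above
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
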
	I0921 22:18:24.791260  270464 cni.go:95] Creating CNI manager for ""
	I0921 22:18:24.791271  270464 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:18:24.794022  270464 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:18:24.795591  270464 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:18:24.799816  270464 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:18:24.799850  270464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:18:24.817623  270464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
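
The cni.go recommendation above keys off the driver/runtime pair: the docker driver with a containerd runtime gets kindnet, whose manifest is then applied with the cached kubectl. An illustrative reduction of that decision (the "bridge" fallback is an assumption for the sketch, not minikube's actual table):

    package main

    import "fmt"

    // chooseCNI mirrors the logged decision in spirit only: the kic drivers
    // combined with a non-Docker runtime get kindnet. This is an illustrative
    // reduction, not minikube's actual cni.go logic.
    func chooseCNI(driver, runtime string) string {
    	if (driver == "docker" || driver == "podman") && runtime != "docker" {
    		return "kindnet"
    	}
    	return "bridge" // assumed default for the sketch
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
    }
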
	I0921 22:18:25.596243  270464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:18:25.604969  270464 system_pods.go:59] 9 kube-system pods found
	I0921 22:18:25.605001  270464 system_pods.go:61] "coredns-565d847f94-k9p5n" [9a7e4a83-e11c-4abc-b3f2-ff2fd6a9a44e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.605008  270464 system_pods.go:61] "etcd-newest-cni-20220921221720-10174" [d56d7a12-a093-43ea-97b4-e5b3cb66bf02] Running
	I0921 22:18:25.605014  270464 system_pods.go:61] "kindnet-gkz8f" [4f7b5b63-e3f9-41bc-803f-cab9949dfca2] Running
	I0921 22:18:25.605020  270464 system_pods.go:61] "kube-apiserver-newest-cni-20220921221720-10174" [51948604-2d5f-405e-b3a2-26740937866d] Running
	I0921 22:18:25.605032  270464 system_pods.go:61] "kube-controller-manager-newest-cni-20220921221720-10174" [5c390a04-523c-487d-bcd5-928f33ed5b04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:18:25.605045  270464 system_pods.go:61] "kube-proxy-47q56" [afb97502-915c-45c2-911b-22d200a8e934] Running
	I0921 22:18:25.605058  270464 system_pods.go:61] "kube-scheduler-newest-cni-20220921221720-10174" [c0e8a08e-f1a5-43e8-a9c3-0f0b36b0abcf] Running
	I0921 22:18:25.605069  270464 system_pods.go:61] "metrics-server-5c8fd5cf8-pb9zk" [92ed4169-eaf5-46ce-88e9-9b3459355c2f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.605079  270464 system_pods.go:61] "storage-provisioner" [5f9c7750-759e-4470-9a9b-b2b0487497b1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.605088  270464 system_pods.go:74] duration metric: took 8.816498ms to wait for pod list to return data ...
	I0921 22:18:25.605099  270464 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:18:25.607635  270464 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:18:25.607661  270464 node_conditions.go:123] node cpu capacity is 8
	I0921 22:18:25.607671  270464 node_conditions.go:105] duration metric: took 2.565892ms to run NodePressure ...
	I0921 22:18:25.607685  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:25.739086  270464 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:18:25.745886  270464 ops.go:34] apiserver oom_adj: -16
	I0921 22:18:25.745922  270464 kubeadm.go:631] restartCluster took 10.707068549s
	I0921 22:18:25.745932  270464 kubeadm.go:398] StartCluster complete in 10.754754438s
	I0921 22:18:25.745952  270464 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:18:25.746054  270464 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:18:25.747383  270464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:18:25.750576  270464 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220921221720-10174" rescaled to 1
	I0921 22:18:25.750634  270464 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:18:25.750653  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:18:25.752370  270464 out.go:177] * Verifying Kubernetes components...
	I0921 22:18:25.750722  270464 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0921 22:18:25.750830  270464 config.go:180] Loaded profile config "newest-cni-20220921221720-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:18:25.753627  270464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:18:25.753641  270464 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753647  270464 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753658  270464 addons.go:65] Setting dashboard=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753668  270464 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220921221720-10174"
	I0921 22:18:25.753674  270464 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753678  270464 addons.go:153] Setting addon dashboard=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.753687  270464 addons.go:162] addon dashboard should already be in state true
	I0921 22:18:25.753668  270464 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.753716  270464 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:18:25.753740  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.753687  270464 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.753790  270464 addons.go:162] addon metrics-server should already be in state true
	I0921 22:18:25.753804  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.753876  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.754023  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.754230  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.754253  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.754411  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.791197  270464 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:18:25.794533  270464 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:18:25.794551  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:18:25.795935  270464 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:18:25.794597  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.798569  270464 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:18:25.797404  270464 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:18:25.804574  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:18:25.804597  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:18:25.799820  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:18:25.804640  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:18:25.804652  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.804698  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.804486  270464 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.804760  270464 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:18:25.804800  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.805405  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.838726  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.843025  270464 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:18:25.843082  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:25.843236  270464 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0921 22:18:25.845316  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.848567  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.849735  270464 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:18:25.849759  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:18:25.849810  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.853835  270464 api_server.go:71] duration metric: took 103.168578ms to wait for apiserver process to appear ...
	I0921 22:18:25.853863  270464 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:18:25.853886  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:25.860137  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0921 22:18:25.861062  270464 api_server.go:140] control plane version: v1.25.2
	I0921 22:18:25.861081  270464 api_server.go:130] duration metric: took 7.211042ms to wait for apiserver health ...
	I0921 22:18:25.861091  270464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:18:25.877319  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.879281  270464 system_pods.go:59] 9 kube-system pods found
	I0921 22:18:25.879308  270464 system_pods.go:61] "coredns-565d847f94-k9p5n" [9a7e4a83-e11c-4abc-b3f2-ff2fd6a9a44e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.879315  270464 system_pods.go:61] "etcd-newest-cni-20220921221720-10174" [d56d7a12-a093-43ea-97b4-e5b3cb66bf02] Running
	I0921 22:18:25.879320  270464 system_pods.go:61] "kindnet-gkz8f" [4f7b5b63-e3f9-41bc-803f-cab9949dfca2] Running
	I0921 22:18:25.879326  270464 system_pods.go:61] "kube-apiserver-newest-cni-20220921221720-10174" [51948604-2d5f-405e-b3a2-26740937866d] Running
	I0921 22:18:25.879335  270464 system_pods.go:61] "kube-controller-manager-newest-cni-20220921221720-10174" [5c390a04-523c-487d-bcd5-928f33ed5b04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:18:25.879345  270464 system_pods.go:61] "kube-proxy-47q56" [afb97502-915c-45c2-911b-22d200a8e934] Running
	I0921 22:18:25.879349  270464 system_pods.go:61] "kube-scheduler-newest-cni-20220921221720-10174" [c0e8a08e-f1a5-43e8-a9c3-0f0b36b0abcf] Running
	I0921 22:18:25.879355  270464 system_pods.go:61] "metrics-server-5c8fd5cf8-pb9zk" [92ed4169-eaf5-46ce-88e9-9b3459355c2f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.879365  270464 system_pods.go:61] "storage-provisioner" [5f9c7750-759e-4470-9a9b-b2b0487497b1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.879371  270464 system_pods.go:74] duration metric: took 18.275303ms to wait for pod list to return data ...
	I0921 22:18:25.879382  270464 default_sa.go:34] waiting for default service account to be created ...
	I0921 22:18:25.881525  270464 default_sa.go:45] found service account: "default"
	I0921 22:18:25.881543  270464 default_sa.go:55] duration metric: took 2.1536ms for default service account to be created ...
	I0921 22:18:25.881551  270464 kubeadm.go:573] duration metric: took 130.888457ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0921 22:18:25.881565  270464 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:18:25.884097  270464 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:18:25.884117  270464 node_conditions.go:123] node cpu capacity is 8
	I0921 22:18:25.884126  270464 node_conditions.go:105] duration metric: took 2.556931ms to run NodePressure ...
	I0921 22:18:25.884135  270464 start.go:216] waiting for startup goroutines ...
	I0921 22:18:25.945255  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:18:25.945285  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:18:25.949672  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:18:25.949688  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:18:25.949709  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:18:25.959632  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:18:25.959656  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:18:25.963888  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:18:25.963911  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:18:25.974223  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:18:25.974250  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:18:25.978556  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:18:25.978581  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:18:25.978847  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:18:25.995095  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:18:25.995206  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:18:25.995231  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:18:26.011190  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:18:26.011243  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:18:26.085101  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:18:26.085150  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:18:26.104077  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:18:26.104114  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:18:26.188309  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:18:26.188339  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:18:26.205569  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:18:26.205601  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:18:26.293701  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:18:26.601264  270464 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220921221720-10174"
	I0921 22:18:26.788594  270464 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:18:26.790190  270464 addons.go:414] enableAddons completed in 1.039462461s
	I0921 22:18:26.840016  270464 start.go:506] kubectl: 1.25.2, cluster: 1.25.2 (minor skew: 0)
	I0921 22:18:26.841805  270464 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220921221720-10174" cluster and "default" namespace by default
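
The final sanity check at start.go:506 compares the client and server minor versions and only reports the skew; both are 1.25 here, so the skew is 0. The arithmetic as a sketch (it assumes well-formed major.minor.patch version strings):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component; panics on malformed input,
    // which is acceptable for this sketch.
    func minor(v string) int {
    	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
    	return m
    }

    func main() {
    	kubectl, cluster := "1.25.2", "1.25.2" // values from the log
    	skew := minor(kubectl) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }
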
	I0921 22:18:24.478339  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:26.978336  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:29.476882  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:31.478356  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:33.976605  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:35.976878  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:38.477434  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:40.478033  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:42.977925  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:45.477571  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:47.977615  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:49.977882  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:52.477206  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:54.477844  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:56.976782  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... the same pod_ready.go:102 message repeats at ~2-2.5s intervals from 22:18:58 through 22:21:09, with identical pod status ...]
	I0921 22:21:11.477503  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
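
The pod_ready.go:102 poll above is minikube repeatedly fetching the coredns pod and checking its readiness; the pod stays Pending because the scheduler sees only one node and that node carries the node.kubernetes.io/not-ready taint. A minimal client-go sketch of the same PodReady-condition check, assuming a reachable kubeconfig at the default path (this is an illustrative reconstruction, not minikube's actual helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True,
// which is the check the log above reports as failing.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the conventional ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.Background(), "coredns-565d847f94-qn9gp", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A Pending pod whose only condition is PodScheduled=False, as in the
	// log above, falls through to false here.
	fmt.Printf("ready=%v phase=%s\n", isPodReady(pod), pod.Status.Phase)
}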
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8c81e8e062ce5       d921cee849482       3 minutes ago       Exited              kindnet-cni               3                   842ff71db5ddd
	3eefbcb898b09       1c7d8c51823b5       12 minutes ago      Running             kube-proxy                0                   3a052127f22d7
	6a4b91f0531d1       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   9756bf60beb90
	a9c3d39d9942f       ca0ea1ee3cfd3       12 minutes ago      Running             kube-scheduler            0                   26094dc69faf0
	b69529a7e224f       97801f8394908       12 minutes ago      Running             kube-apiserver            0                   6c9070db9088c
	b1a22ede66e31       dbfceb93c69b6       12 minutes ago      Running             kube-controller-manager   0                   649c092f0b0ca
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:08:33 UTC, end at Wed 2022-09-21 22:21:16 UTC. --
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.125930714Z" level=warning msg="cleaning up after shim disconnected" id=4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221 namespace=k8s.io
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.125942180Z" level=info msg="cleaning up dead shim"
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.137353933Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:14:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2923 runtime=io.containerd.runc.v2\n"
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.801705532Z" level=info msg="RemoveContainer for \"9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939\""
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.808495036Z" level=info msg="RemoveContainer for \"9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939\" returns successfully"
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.113627373Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.128050108Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\""
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.128661254Z" level=info msg="StartContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\""
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.195171816Z" level=info msg="StartContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\" returns successfully"
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.728316246Z" level=info msg="shim disconnected" id=543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.728384023Z" level=warning msg="cleaning up after shim disconnected" id=543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1 namespace=k8s.io
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.728394963Z" level=info msg="cleaning up dead shim"
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.738340815Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:17:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3040 runtime=io.containerd.runc.v2\n"
	Sep 21 22:17:31 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:31.124831173Z" level=info msg="RemoveContainer for \"4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221\""
	Sep 21 22:17:31 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:31.130281615Z" level=info msg="RemoveContainer for \"4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221\" returns successfully"
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.114835667Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.127471150Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b\""
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.128003409Z" level=info msg="StartContainer for \"8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b\""
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.203756780Z" level=info msg="StartContainer for \"8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b\" returns successfully"
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.725730306Z" level=info msg="shim disconnected" id=8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.725808516Z" level=warning msg="cleaning up after shim disconnected" id=8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b namespace=k8s.io
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.725820081Z" level=info msg="cleaning up dead shim"
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.735680498Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:20:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3152 runtime=io.containerd.runc.v2\n"
	Sep 21 22:20:35 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:35.469852172Z" level=info msg="RemoveContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\""
	Sep 21 22:20:35 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:35.475973253Z" level=info msg="RemoveContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\" returns successfully"
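
The containerd journal above records the kindnet-cni crash loop at the runtime level: each attempt goes CreateContainer/StartContainer, then "shim disconnected" when the process dies, then RemoveContainer of the previous attempt. A small sketch of inspecting those containers with the containerd Go client, assuming access to the same /run/containerd/containerd.sock and the CRI-managed "k8s.io" namespace (illustrative only, not part of the test suite):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the node's cri-socket annotation points at.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Containers created via CRI live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// No live task, e.g. an exited attempt whose shim disconnected.
			fmt.Printf("%s: no task (%v)\n", c.ID(), err)
			continue
		}
		status, err := task.Status(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("%s: %s\n", c.ID(), status.Status)
	}
}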
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220921220832-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220921220832-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=no-preload-20220921220832-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_08_58_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:08:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220921220832-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:21:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220921220832-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                44c6c62a-5061-4f07-a2f0-9d563da1b73e
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220921220832-10174                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-27cj5                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220921220832-10174              250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220921220832-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nxpf5                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220921220832-10174              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller
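
The node description above ties the two failure threads together: Ready is False with "cni plugin not initialized" (the crashing kindnet-cni container), which leaves the node.kubernetes.io/not-ready:NoSchedule taint in place, which in turn is the untolerated taint the coredns scheduling messages complained about. A client-go sketch of reading those taints and the Ready condition, under the same kubeconfig assumption as the earlier sketch (illustrative, not the test's own code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	node, err := clientset.CoreV1().Nodes().Get(
		context.Background(), "no-preload-20220921220832-10174", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The not-ready:NoSchedule taint shown above is added by the node
	// controller and only cleared once the node reports Ready.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s\n", c.Status, c.Reason)
		}
	}
}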
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[... the same "IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0" / ll header pair repeats at roughly 1-3s intervals through Sep21 22:11 ...]
	
	* 
	* ==> etcd [6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646] <==
	* {"level":"info","ts":"2022-09-21T22:09:12.389Z","caller":"traceutil/trace.go:171","msg":"trace[281568925] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"276.956319ms","start":"2022-09-21T22:09:12.112Z","end":"2022-09-21T22:09:12.389Z","steps":["trace[281568925] 'process raft request'  (duration: 276.78745ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[34397509] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:344; }","duration":"274.143178ms","start":"2022-09-21T22:09:12.115Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[34397509] 'read index received'  (duration: 184.947899ms)","trace[34397509] 'applied index is now lower than readState.Index'  (duration: 89.194459ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[86480274] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"267.792108ms","start":"2022-09-21T22:09:12.122Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[86480274] 'process raft request'  (duration: 267.534906ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[1151117429] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"270.92859ms","start":"2022-09-21T22:09:12.119Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[1151117429] 'process raft request'  (duration: 270.611323ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.390Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"274.275236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[490030946] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"274.326664ms","start":"2022-09-21T22:09:12.115Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[490030946] 'agreement among raft nodes before linearized reading'  (duration: 274.229021ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.398Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"281.551371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4601"}
	{"level":"info","ts":"2022-09-21T22:09:12.398Z","caller":"traceutil/trace.go:171","msg":"trace[21787300] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:338; }","duration":"281.619065ms","start":"2022-09-21T22:09:12.116Z","end":"2022-09-21T22:09:12.398Z","steps":["trace[21787300] 'agreement among raft nodes before linearized reading'  (duration: 281.515501ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.516Z","caller":"traceutil/trace.go:171","msg":"trace[1947667569] linearizableReadLoop","detail":"{readStateIndex:355; appliedIndex:355; }","duration":"118.331899ms","start":"2022-09-21T22:09:12.397Z","end":"2022-09-21T22:09:12.516Z","steps":["trace[1947667569] 'read index received'  (duration: 118.31338ms)","trace[1947667569] 'applied index is now lower than readState.Index'  (duration: 15.637µs)"],"step_count":2}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"125.444129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-27cj5\" ","response":"range_response_count:1 size:3714"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[877726603] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-27cj5; range_end:; response_count:1; response_revision:342; }","duration":"125.520607ms","start":"2022-09-21T22:09:12.392Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[877726603] 'agreement among raft nodes before linearized reading'  (duration: 123.409684ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"124.770564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1569897163] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:342; }","duration":"124.848752ms","start":"2022-09-21T22:09:12.393Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1569897163] 'agreement among raft nodes before linearized reading'  (duration: 122.552703ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[584162130] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"116.491437ms","start":"2022-09-21T22:09:12.402Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[584162130] 'process raft request'  (duration: 116.413393ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1358578711] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"118.162741ms","start":"2022-09-21T22:09:12.400Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1358578711] 'process raft request'  (duration: 115.718872ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.950399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-27cj5\" ","response":"range_response_count:1 size:3714"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1353864366] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-27cj5; range_end:; response_count:1; response_revision:346; }","duration":"119.996483ms","start":"2022-09-21T22:09:12.398Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1353864366] 'agreement among raft nodes before linearized reading'  (duration: 119.913158ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[677856182] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"116.412604ms","start":"2022-09-21T22:09:12.402Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[677856182] 'process raft request'  (duration: 116.313434ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.884809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4601"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[39557479] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"117.099753ms","start":"2022-09-21T22:09:12.401Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[39557479] 'process raft request'  (duration: 116.94402ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1861985295] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:346; }","duration":"116.918724ms","start":"2022-09-21T22:09:12.401Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1861985295] 'agreement among raft nodes before linearized reading'  (duration: 116.853502ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.054969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-nxpf5\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1034205839] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-nxpf5; range_end:; response_count:1; response_revision:346; }","duration":"120.090347ms","start":"2022-09-21T22:09:12.398Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1034205839] 'agreement among raft nodes before linearized reading'  (duration: 120.027308ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:18:52.490Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":452}
	{"level":"info","ts":"2022-09-21T22:18:52.490Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":452,"took":"357.608µs"}
	
	* 
	* ==> kernel <==
	*  22:21:17 up  1:03,  0 users,  load average: 0.36, 1.10, 1.78
	Linux no-preload-20220921220832-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0] <==
	* I0921 22:08:55.091280       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:08:55.162042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:08:55.175656       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:08:55.175803       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:08:55.175922       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:08:55.176029       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:08:55.176068       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:08:55.176224       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:08:55.756064       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:08:55.965619       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:08:55.968635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:08:55.968658       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:08:56.332396       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:08:56.376266       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:08:56.506135       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:08:56.511625       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0921 22:08:56.512520       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:08:56.515761       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:08:57.004401       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:08:57.961688       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:08:57.968176       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:08:57.975887       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:08:58.080449       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:09:11.440196       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0921 22:09:11.442903       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409] <==
	* I0921 22:09:10.783384       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	W0921 22:09:10.783442       1 node_lifecycle_controller.go:1058] Missing timestamp for Node no-preload-20220921220832-10174. Assuming now as a timestamp.
	I0921 22:09:10.783454       1 taint_manager.go:209] "Sending events to api server"
	I0921 22:09:10.783502       1 event.go:294] "Event occurred" object="no-preload-20220921220832-10174" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller"
	I0921 22:09:10.783512       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0921 22:09:10.827001       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0921 22:09:10.840033       1 range_allocator.go:367] Set node no-preload-20220921220832-10174 PodCIDR to [10.244.0.0/24]
	I0921 22:09:10.840385       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:09:10.865038       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:09:10.880243       1 shared_informer.go:262] Caches are synced for crt configmap
	I0921 22:09:10.932197       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:09:10.932227       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0921 22:09:10.932229       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:09:10.932439       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:09:10.932555       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:09:11.261700       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:09:11.282960       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:09:11.282991       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:09:11.720390       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:09:11.808535       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nxpf5"
	I0921 22:09:11.808562       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-27cj5"
	I0921 22:09:12.391915       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-9864v"
	I0921 22:09:12.399398       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-m8xgt"
	I0921 22:09:12.734086       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:09:12.739609       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-9864v"
	
	* 
	* ==> kube-proxy [3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843] <==
	* I0921 22:09:13.034514       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0921 22:09:13.034595       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0921 22:09:13.034633       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:09:13.054326       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:09:13.054377       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:09:13.054390       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:09:13.054418       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:09:13.054463       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:09:13.054692       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:09:13.055025       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:09:13.055049       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:09:13.055668       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:09:13.055697       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:09:13.055773       1 config.go:444] "Starting node config controller"
	I0921 22:09:13.055782       1 config.go:317] "Starting service config controller"
	I0921 22:09:13.055817       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:09:13.055807       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:09:13.156647       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:09:13.156676       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:09:13.156693       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb] <==
	* W0921 22:08:55.106405       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:08:55.106631       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:08:55.106455       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:08:55.106648       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:08:55.106452       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:08:55.106664       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0921 22:08:55.106822       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:08:55.106832       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:08:55.106813       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106848       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:08:55.106851       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:08:55.106908       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106964       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106987       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:08:55.107358       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:08:55.107467       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:08:55.937169       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.937212       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:08:56.007668       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:08:56.007706       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:08:56.040851       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:08:56.040885       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:08:56.152475       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:08:56.152518       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0921 22:08:56.597365       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:08:33 UTC, end at Wed 2022-09-21 22:21:17 UTC. --
	Sep 21 22:19:58 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:19:58.453459    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:03.454577    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:08 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:08.455920    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:13 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:13.456944    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:18 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:18.458076    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:23 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:23.459611    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:28 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:28.460534    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:33 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:33.461879    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:35 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:20:35.468570    1740 scope.go:115] "RemoveContainer" containerID="543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1"
	Sep 21 22:20:35 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:20:35.468863    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:20:35 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:35.469235    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	Sep 21 22:20:38 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:38.463058    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:43 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:43.464499    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:48 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:48.466201    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:49 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:20:49.111033    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:20:49 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:49.111304    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	Sep 21 22:20:53 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:53.467967    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:58 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:58.469381    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:03 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:21:03.112041    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:21:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:03.112331    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	Sep 21 22:21:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:03.471063    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:08 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:08.472467    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:13 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:13.474032    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:14 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:21:14.111497    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:21:14 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:14.111812    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-m8xgt storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe pod busybox coredns-565d847f94-m8xgt storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220921220832-10174 describe pod busybox coredns-565d847f94-m8xgt storage-provisioner: exit status 1 (66.261273ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8284w (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-8284w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m49s (x2 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-m8xgt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220921220832-10174 describe pod busybox coredns-565d847f94-m8xgt storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220832-10174
helpers_test.go:235: (dbg) docker inspect no-preload-20220921220832-10174:

-- stdout --
	[
	    {
	        "Id": "d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e",
	        "Created": "2022-09-21T22:08:33.259074855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 242679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:08:33.608689229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hosts",
	        "LogPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e-json.log",
	        "Name": "/no-preload-20220921220832-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220921220832-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220921220832-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220921220832-10174",
	                "Source": "/var/lib/docker/volumes/no-preload-20220921220832-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220921220832-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "name.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f3429c3eccb420d534b5769179f5361b8b68686659e922bbb6d167cf1b0160",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49406"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49405"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/29f3429c3ecc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220921220832-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d6359e799a3f",
	                        "no-preload-20220921220832-10174"
	                    ],
	                    "NetworkID": "40cb175bb75cdb2ff8ee942229fbc7e22e0ed7651da5bae77cd3dd1e2f70c5e3",
	                    "EndpointID": "3a727e68b6a78ddeed89a7d40cdef360d206e4656d04dab25ad21e8976c86ff4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220921220832-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:09 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:09 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174                    |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:17:58
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
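	(Decoding aid: per the format line above, a prefix such as "I0921 22:17:58.120581  270464 out.go:296]" reads as severity I for info (W/E/F for warning/error/fatal), date 09/21, time 22:17:58.120581, process/thread id 270464, and the source file and line that emitted the message. Two ids interleave below: 270464 is this newest-cni start, while the 265259 lines appear to come from a test running in parallel.)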
	I0921 22:17:58.120581  270464 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:17:58.120688  270464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:58.120697  270464 out.go:309] Setting ErrFile to fd 2...
	I0921 22:17:58.120702  270464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:58.120845  270464 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:17:58.121392  270464 out.go:303] Setting JSON to false
	I0921 22:17:58.122972  270464 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3629,"bootTime":1663795049,"procs":558,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:17:58.123042  270464 start.go:125] virtualization: kvm guest
	I0921 22:17:58.125682  270464 out.go:177] * [newest-cni-20220921221720-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:17:58.127437  270464 notify.go:214] Checking for updates...
	I0921 22:17:58.127440  270464 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:17:58.128956  270464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:17:58.130431  270464 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:17:58.131871  270464 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:17:58.133348  270464 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:17:58.135018  270464 config.go:180] Loaded profile config "newest-cni-20220921221720-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:17:58.135427  270464 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:17:58.168045  270464 docker.go:137] docker version: linux-20.10.18
	I0921 22:17:58.168151  270464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:58.266464  270464 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:58.189737336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:17:58.266580  270464 docker.go:254] overlay module found
	I0921 22:17:58.268829  270464 out.go:177] * Using the docker driver based on existing profile
	I0921 22:17:58.270222  270464 start.go:284] selected driver: docker
	I0921 22:17:58.270256  270464 start.go:808] validating driver "docker" against &{Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:58.270381  270464 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:17:58.271497  270464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:58.368662  270464 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:58.293908006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:17:58.368948  270464 start_flags.go:886] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0921 22:17:58.368972  270464 cni.go:95] Creating CNI manager for ""
	I0921 22:17:58.368978  270464 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:17:58.368988  270464 start_flags.go:316] config:
	{Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:58.371546  270464 out.go:177] * Starting control plane node newest-cni-20220921221720-10174 in cluster newest-cni-20220921221720-10174
	I0921 22:17:58.373254  270464 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:17:58.375006  270464 out.go:177] * Pulling base image ...
	I0921 22:17:58.376378  270464 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:17:58.376432  270464 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:17:58.376441  270464 cache.go:57] Caching tarball of preloaded images
	I0921 22:17:58.376496  270464 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:17:58.376670  270464 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:17:58.376685  270464 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:17:58.376794  270464 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/config.json ...
	I0921 22:17:58.405898  270464 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:17:58.405926  270464 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:17:58.405944  270464 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:17:58.405982  270464 start.go:364] acquiring machines lock for newest-cni-20220921221720-10174: {Name:mk8430a9f0d2e7c62068c70c502e8bb9880fed55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:17:58.406109  270464 start.go:368] acquired machines lock for "newest-cni-20220921221720-10174" in 88.174µs
	I0921 22:17:58.406138  270464 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:17:58.406145  270464 fix.go:55] fixHost starting: 
	I0921 22:17:58.406459  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:17:58.433186  270464 fix.go:103] recreateIfNeeded on newest-cni-20220921221720-10174: state=Stopped err=<nil>
	W0921 22:17:58.433223  270464 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:17:58.435481  270464 out.go:177] * Restarting existing docker container for "newest-cni-20220921221720-10174" ...
	I0921 22:17:59.476817  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:01.477612  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:17:58.436724  270464 cli_runner.go:164] Run: docker start newest-cni-20220921221720-10174
	I0921 22:17:58.812251  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:17:58.838996  270464 kic.go:415] container "newest-cni-20220921221720-10174" state is running.
	I0921 22:17:58.839361  270464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220921221720-10174
	I0921 22:17:58.863864  270464 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/config.json ...
	I0921 22:17:58.864128  270464 machine.go:88] provisioning docker machine ...
	I0921 22:17:58.864176  270464 ubuntu.go:169] provisioning hostname "newest-cni-20220921221720-10174"
	I0921 22:17:58.864232  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:17:58.890333  270464 main.go:134] libmachine: Using SSH client type: native
	I0921 22:17:58.890539  270464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49433 <nil> <nil>}
	I0921 22:17:58.890562  270464 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220921221720-10174 && echo "newest-cni-20220921221720-10174" | sudo tee /etc/hostname
	I0921 22:17:58.891272  270464 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47948->127.0.0.1:49433: read: connection reset by peer
	I0921 22:18:02.032680  270464 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220921221720-10174
	
	I0921 22:18:02.032755  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.057717  270464 main.go:134] libmachine: Using SSH client type: native
	I0921 22:18:02.057857  270464 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49433 <nil> <nil>}
	I0921 22:18:02.057877  270464 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220921221720-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220921221720-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220921221720-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:18:02.187457  270464 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:18:02.187491  270464 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:18:02.187537  270464 ubuntu.go:177] setting up certificates
	I0921 22:18:02.187552  270464 provision.go:83] configureAuth start
	I0921 22:18:02.187614  270464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220921221720-10174
	I0921 22:18:02.212498  270464 provision.go:138] copyHostCerts
	I0921 22:18:02.212563  270464 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:18:02.212582  270464 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:18:02.212646  270464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:18:02.212744  270464 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:18:02.212757  270464 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:18:02.212785  270464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:18:02.212842  270464 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:18:02.212852  270464 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:18:02.212877  270464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:18:02.212920  270464 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220921221720-10174 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220921221720-10174]
	I0921 22:18:02.324560  270464 provision.go:172] copyRemoteCerts
	I0921 22:18:02.324626  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:18:02.324668  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.350508  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.443035  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:18:02.461197  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0921 22:18:02.478544  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:18:02.496270  270464 provision.go:86] duration metric: configureAuth took 308.708013ms
	I0921 22:18:02.496297  270464 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:18:02.496485  270464 config.go:180] Loaded profile config "newest-cni-20220921221720-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:18:02.496503  270464 machine.go:91] provisioned docker machine in 3.63235546s
	I0921 22:18:02.496513  270464 start.go:300] post-start starting for "newest-cni-20220921221720-10174" (driver="docker")
	I0921 22:18:02.496522  270464 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:18:02.496574  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:18:02.496622  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.522376  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.615677  270464 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:18:02.618648  270464 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:18:02.618675  270464 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:18:02.618684  270464 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:18:02.618689  270464 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:18:02.618700  270464 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:18:02.618752  270464 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:18:02.618845  270464 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:18:02.618970  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:18:02.626501  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:18:02.644714  270464 start.go:303] post-start completed in 148.187534ms
	I0921 22:18:02.644788  270464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:18:02.644827  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.670911  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.764893  270464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:18:02.768743  270464 fix.go:57] fixHost completed within 4.362593316s
	I0921 22:18:02.768771  270464 start.go:83] releasing machines lock for "newest-cni-20220921221720-10174", held for 4.362644221s
	I0921 22:18:02.768855  270464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220921221720-10174
	I0921 22:18:02.795436  270464 ssh_runner.go:195] Run: systemctl --version
	I0921 22:18:02.795492  270464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:18:02.795497  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.795554  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:02.823494  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.824005  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:02.944546  270464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:18:02.956593  270464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:18:02.966137  270464 docker.go:188] disabling docker service ...
	I0921 22:18:02.966187  270464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:18:02.976524  270464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:18:02.985862  270464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:18:03.072821  270464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
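	(The systemctl sequence above is the runtime switch: the kic base image ships Docker alongside containerd, and minikube stops, disables, and masks the Docker units so containerd alone serves CRI. A quick manual check after such a switch might look like this sketch; the expected outputs are assumptions, not taken from this log.)
	# sketch: verify docker is masked and containerd is the active runtime
	sudo systemctl is-enabled docker.service   # expect: masked
	sudo systemctl is-active containerd        # expect: active (after the restart further down)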
	I0921 22:18:03.977007  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:06.477048  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:03.151088  270464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:18:03.160276  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:18:03.173665  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:18:03.181994  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:18:03.190576  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:18:03.198773  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:18:03.207065  270464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:18:03.214108  270464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:18:03.220437  270464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:18:03.291585  270464 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:18:03.364927  270464 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:18:03.365006  270464 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:18:03.368520  270464 start.go:471] Will wait 60s for crictl version
	I0921 22:18:03.368580  270464 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:18:03.394076  270464 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:18:03Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:18:08.977213  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:10.977540  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:12.977891  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:14.441010  270464 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:18:14.464510  270464 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:18:14.464583  270464 ssh_runner.go:195] Run: containerd --version
	I0921 22:18:14.496598  270464 ssh_runner.go:195] Run: containerd --version
	I0921 22:18:14.529347  270464 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:18:14.530700  270464 cli_runner.go:164] Run: docker network inspect newest-cni-20220921221720-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:18:14.554818  270464 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0921 22:18:14.558252  270464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:18:14.569873  270464 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0921 22:18:14.979586  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:17.477557  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:14.571441  270464 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:18:14.571511  270464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:18:14.596174  270464 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:18:14.596199  270464 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:18:14.596243  270464 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:18:14.620658  270464 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:18:14.620687  270464 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:18:14.620752  270464 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:18:14.646216  270464 cni.go:95] Creating CNI manager for ""
	I0921 22:18:14.646248  270464 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:18:14.646263  270464 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0921 22:18:14.646280  270464 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220921221720-10174 NodeName:newest-cni-20220921221720-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:18:14.646437  270464 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220921221720-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
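	
	(The four YAML documents above, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, form the kubeadm config that minikube copies to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Outside minikube, a file of this shape would be consumed roughly as in this sketch; the exact invocation is an assumption, not what this run executes verbatim.)
	# sketch: bootstrap a control plane from a generated kubeadm config
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml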
	
	I0921 22:18:14.646545  270464 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220921221720-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0921 22:18:14.646603  270464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:18:14.654277  270464 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:18:14.654354  270464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:18:14.660964  270464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (561 bytes)
	I0921 22:18:14.673432  270464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:18:14.685836  270464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2196 bytes)
	I0921 22:18:14.698638  270464 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:18:14.701537  270464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:18:14.710830  270464 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174 for IP: 192.168.76.2
	I0921 22:18:14.710932  270464 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:18:14.710994  270464 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:18:14.711080  270464 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/client.key
	I0921 22:18:14.711147  270464 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/apiserver.key.31bdca25
	I0921 22:18:14.711222  270464 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/proxy-client.key
	I0921 22:18:14.711359  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:18:14.711402  270464 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:18:14.711421  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:18:14.711455  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:18:14.711490  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:18:14.711523  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:18:14.711582  270464 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:18:14.712338  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:18:14.729559  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0921 22:18:14.746255  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:18:14.763711  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/newest-cni-20220921221720-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:18:14.780973  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:18:14.797778  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:18:14.815611  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:18:14.833381  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:18:14.851012  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:18:14.868746  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:18:14.886081  270464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:18:14.902622  270464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:18:14.915138  270464 ssh_runner.go:195] Run: openssl version
	I0921 22:18:14.919952  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:18:14.927571  270464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:18:14.930778  270464 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:18:14.930831  270464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:18:14.935861  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:18:14.942694  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:18:14.950509  270464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:18:14.953862  270464 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:18:14.953908  270464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:18:14.958697  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:18:14.966302  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:18:14.974033  270464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:18:14.977905  270464 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:18:14.977966  270464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:18:14.983670  270464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
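
The three repeated blocks above (ls -la, openssl x509 -hash, ln -fs) follow the OpenSSL CA-directory convention: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs both by file name and by its subject hash with a .0 suffix, which is the key OpenSSL uses to locate trust anchors. A rough sketch of those steps, assuming an openssl binary on PATH and write access to /etc/ssl/certs (installCACert is an illustrative name, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCACert links a CA certificate into /etc/ssl/certs under its
	// OpenSSL subject hash (<hash>.0), the lookup key TLS verification uses
	// when trust anchors live in a hashed CA directory. Needs root for the
	// symlink, like the sudo'd commands in the log.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		_ = os.Remove(link) // replace a stale link, as `ln -fs` would
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
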
	I0921 22:18:14.991185  270464 kubeadm.go:396] StartCluster: {Name:newest-cni-20220921221720-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:newest-cni-20220921221720-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:18:14.991309  270464 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:18:14.991360  270464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:18:15.018146  270464 cri.go:87] found id: "3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08"
	I0921 22:18:15.018180  270464 cri.go:87] found id: "4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec"
	I0921 22:18:15.018190  270464 cri.go:87] found id: "b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477"
	I0921 22:18:15.018203  270464 cri.go:87] found id: "6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f"
	I0921 22:18:15.018211  270464 cri.go:87] found id: "8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6"
	I0921 22:18:15.018220  270464 cri.go:87] found id: "5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624"
	I0921 22:18:15.018233  270464 cri.go:87] found id: ""
	I0921 22:18:15.018274  270464 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:18:15.031385  270464 cri.go:114] JSON = null
	W0921 22:18:15.031441  270464 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
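
The warning above comes from cross-checking two views of the runtime: crictl ps -a reports six kube-system containers, while runc list (which only sees containers under its own --root state directory) returns JSON null, so there is nothing to unpause and the restart path simply continues. A sketch of that consistency check, assuming crictl and runc on PATH (comparePausedVsRunning is an illustrative name):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// comparePausedVsRunning reproduces the sanity check behind the warning:
	// crictl lists kube-system containers, runc lists what it tracks in its
	// own state directory, and a count mismatch means unpause is impossible.
	func comparePausedVsRunning() error {
		psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(psOut))

		runcOut, err := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			return err
		}
		var tracked []map[string]any
		_ = json.Unmarshal(runcOut, &tracked) // a literal "null" yields a nil slice

		if len(tracked) != len(ids) {
			return fmt.Errorf("list returned %d containers, but ps returned %d",
				len(tracked), len(ids))
		}
		return nil
	}

	func main() { fmt.Println(comparePausedVsRunning()) }
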
	I0921 22:18:15.031513  270464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:18:15.038800  270464 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:18:15.038845  270464 kubeadm.go:627] restartCluster start
	I0921 22:18:15.038892  270464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:18:15.045772  270464 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.046513  270464 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220921221720-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:18:15.046982  270464 kubeconfig.go:127] "newest-cni-20220921221720-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:18:15.047754  270464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:18:15.049153  270464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:18:15.056226  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.056274  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.064290  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.264705  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.264804  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.273425  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.464713  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.464795  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.473459  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.664713  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.664815  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.673518  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:15.864904  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:15.864999  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:15.873682  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.064894  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.064973  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.073765  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.264891  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.264958  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.273621  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.464853  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.464930  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.473725  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.664886  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.664965  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.673397  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:16.864465  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:16.864572  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:16.873141  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.064395  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.064496  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.073006  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.265339  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.265419  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.274056  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.465346  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.465413  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.473854  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.665165  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.665252  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.673859  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:17.865160  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:17.865248  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:17.873894  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.065385  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:18.065491  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:18.073903  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.073926  270464 api_server.go:165] Checking apiserver status ...
	I0921 22:18:18.073970  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:18:18.082276  270464 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.082307  270464 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
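
The run of "Checking apiserver status" entries above is a plain poll-until-deadline loop: try pgrep for the apiserver roughly every 200ms and declare "needs reconfigure" once the overall wait expires. A generic sketch of that pattern (waitForProcess, the interval, and the timeout are illustrative values, not minikube's actual ones):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `pgrep -xnf pattern` until a match appears or the
	// deadline passes, the same shape as the repeated status checks above.
	func waitForProcess(pattern string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
				return nil // a matching process exists
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out waiting for %q", pattern)
	}

	func main() {
		fmt.Println(waitForProcess("kube-apiserver.*minikube.*",
			200*time.Millisecond, 3*time.Second))
	}
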
	I0921 22:18:18.082315  270464 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:18:18.082329  270464 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:18:18.082383  270464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:18:18.107225  270464 cri.go:87] found id: "3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08"
	I0921 22:18:18.107253  270464 cri.go:87] found id: "4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec"
	I0921 22:18:18.107263  270464 cri.go:87] found id: "b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477"
	I0921 22:18:18.107274  270464 cri.go:87] found id: "6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f"
	I0921 22:18:18.107284  270464 cri.go:87] found id: "8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6"
	I0921 22:18:18.107300  270464 cri.go:87] found id: "5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624"
	I0921 22:18:18.107312  270464 cri.go:87] found id: ""
	I0921 22:18:18.107331  270464 cri.go:232] Stopping containers: [3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08 4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477 6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f 8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6 5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624]
	I0921 22:18:18.107386  270464 ssh_runner.go:195] Run: which crictl
	I0921 22:18:18.110370  270464 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 3a90bbbf9dee102645e5d858c512cf54c2be058ce005a530ee9d36b11a784e08 4b88290d420156bc5be9ec0174239b290253932435968d53816a4841aa62f1ec b495f8c2d6e405409a334bfad5c00ddfc96191b10c91ae5b864c93347ff69477 6862142ad52d2991d0eec9bbe9984aad250d6b9511d442016601c23da4aa669f 8e3bc0e86297d96513344dfd94f78e4c7fad81839242beb9bd803a31164bf4b6 5e610026c9c3e8ed5f60cee9db11564bf155aa0dee2ff53200ae08b8cbeee624
	I0921 22:18:19.977758  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:21.977804  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:18.135605  270464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:18:18.145590  270464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:18:18.152746  270464 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep 21 22:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 21 22:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 21 22:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:17 /etc/kubernetes/scheduler.conf
	
	I0921 22:18:18.152791  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0921 22:18:18.159414  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0921 22:18:18.165868  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0921 22:18:18.172296  270464 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.172351  270464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:18:18.178817  270464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0921 22:18:18.185460  270464 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:18:18.185502  270464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:18:18.191903  270464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:18:18.198347  270464 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:18:18.198366  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:18.243873  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:18.875476  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:19.008429  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:19.075900  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:19.188641  270464 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:18:19.188725  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:19.698462  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:20.198832  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:20.276477  270464 api_server.go:71] duration metric: took 1.087835442s to wait for apiserver process to appear ...
	I0921 22:18:20.276512  270464 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:18:20.276525  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:20.276919  270464 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0921 22:18:20.777646  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:23.481306  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:18:23.481335  270464 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:18:23.777732  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:23.783406  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:18:23.783445  270464 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:18:24.277560  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:24.282590  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:18:24.282619  270464 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:18:24.777960  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:24.783842  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0921 22:18:24.791207  270464 api_server.go:140] control plane version: v1.25.2
	I0921 22:18:24.791246  270464 api_server.go:130] duration metric: took 4.514726109s to wait for apiserver health ...
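
The healthz progression above (connection refused, then 403 for the anonymous user, then 500 while the rbac and priority-class post-start hooks finish, then 200) is the normal apiserver warm-up sequence; the checker simply retries until the body is exactly "ok". A compact sketch of such a probe (probeHealthz is illustrative; the insecure TLS config stands in for the phase before client credentials are wired up):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls an apiserver /healthz endpoint until it answers
	// 200 "ok" or the deadline passes; 403s and 500s count as "still
	// starting", matching the warm-up progression in the log.
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz not ready after %v at %s", timeout, url)
	}

	func main() {
		fmt.Println(probeHealthz("https://192.168.76.2:8443/healthz", 30*time.Second))
	}
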
	I0921 22:18:24.791260  270464 cni.go:95] Creating CNI manager for ""
	I0921 22:18:24.791271  270464 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:18:24.794022  270464 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:18:24.795591  270464 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:18:24.799816  270464 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:18:24.799850  270464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:18:24.817623  270464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:18:25.596243  270464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:18:25.604969  270464 system_pods.go:59] 9 kube-system pods found
	I0921 22:18:25.605001  270464 system_pods.go:61] "coredns-565d847f94-k9p5n" [9a7e4a83-e11c-4abc-b3f2-ff2fd6a9a44e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.605008  270464 system_pods.go:61] "etcd-newest-cni-20220921221720-10174" [d56d7a12-a093-43ea-97b4-e5b3cb66bf02] Running
	I0921 22:18:25.605014  270464 system_pods.go:61] "kindnet-gkz8f" [4f7b5b63-e3f9-41bc-803f-cab9949dfca2] Running
	I0921 22:18:25.605020  270464 system_pods.go:61] "kube-apiserver-newest-cni-20220921221720-10174" [51948604-2d5f-405e-b3a2-26740937866d] Running
	I0921 22:18:25.605032  270464 system_pods.go:61] "kube-controller-manager-newest-cni-20220921221720-10174" [5c390a04-523c-487d-bcd5-928f33ed5b04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:18:25.605045  270464 system_pods.go:61] "kube-proxy-47q56" [afb97502-915c-45c2-911b-22d200a8e934] Running
	I0921 22:18:25.605058  270464 system_pods.go:61] "kube-scheduler-newest-cni-20220921221720-10174" [c0e8a08e-f1a5-43e8-a9c3-0f0b36b0abcf] Running
	I0921 22:18:25.605069  270464 system_pods.go:61] "metrics-server-5c8fd5cf8-pb9zk" [92ed4169-eaf5-46ce-88e9-9b3459355c2f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.605079  270464 system_pods.go:61] "storage-provisioner" [5f9c7750-759e-4470-9a9b-b2b0487497b1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.605088  270464 system_pods.go:74] duration metric: took 8.816498ms to wait for pod list to return data ...
	I0921 22:18:25.605099  270464 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:18:25.607635  270464 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:18:25.607661  270464 node_conditions.go:123] node cpu capacity is 8
	I0921 22:18:25.607671  270464 node_conditions.go:105] duration metric: took 2.565892ms to run NodePressure ...
	I0921 22:18:25.607685  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:18:25.739086  270464 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:18:25.745886  270464 ops.go:34] apiserver oom_adj: -16
	I0921 22:18:25.745922  270464 kubeadm.go:631] restartCluster took 10.707068549s
	I0921 22:18:25.745932  270464 kubeadm.go:398] StartCluster complete in 10.754754438s
	I0921 22:18:25.745952  270464 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:18:25.746054  270464 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:18:25.747383  270464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:18:25.750576  270464 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220921221720-10174" rescaled to 1
	I0921 22:18:25.750634  270464 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:18:25.750653  270464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:18:25.752370  270464 out.go:177] * Verifying Kubernetes components...
	I0921 22:18:25.750722  270464 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0921 22:18:25.750830  270464 config.go:180] Loaded profile config "newest-cni-20220921221720-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:18:25.753627  270464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:18:25.753641  270464 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753647  270464 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753658  270464 addons.go:65] Setting dashboard=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753668  270464 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220921221720-10174"
	I0921 22:18:25.753674  270464 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220921221720-10174"
	I0921 22:18:25.753678  270464 addons.go:153] Setting addon dashboard=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.753687  270464 addons.go:162] addon dashboard should already be in state true
	I0921 22:18:25.753668  270464 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.753716  270464 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:18:25.753740  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.753687  270464 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.753790  270464 addons.go:162] addon metrics-server should already be in state true
	I0921 22:18:25.753804  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.753876  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.754023  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.754230  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.754253  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.754411  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.791197  270464 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:18:25.794533  270464 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:18:25.794551  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:18:25.795935  270464 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:18:25.794597  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.798569  270464 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:18:25.797404  270464 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:18:25.804574  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:18:25.804597  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:18:25.799820  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:18:25.804640  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:18:25.804652  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.804698  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.804486  270464 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220921221720-10174"
	W0921 22:18:25.804760  270464 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:18:25.804800  270464 host.go:66] Checking if "newest-cni-20220921221720-10174" exists ...
	I0921 22:18:25.805405  270464 cli_runner.go:164] Run: docker container inspect newest-cni-20220921221720-10174 --format={{.State.Status}}
	I0921 22:18:25.838726  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.843025  270464 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:18:25.843082  270464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:18:25.843236  270464 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0921 22:18:25.845316  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.848567  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.849735  270464 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:18:25.849759  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:18:25.849810  270464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220921221720-10174
	I0921 22:18:25.853835  270464 api_server.go:71] duration metric: took 103.168578ms to wait for apiserver process to appear ...
	I0921 22:18:25.853863  270464 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:18:25.853886  270464 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0921 22:18:25.860137  270464 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0921 22:18:25.861062  270464 api_server.go:140] control plane version: v1.25.2
	I0921 22:18:25.861081  270464 api_server.go:130] duration metric: took 7.211042ms to wait for apiserver health ...
	I0921 22:18:25.861091  270464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:18:25.877319  270464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49433 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/newest-cni-20220921221720-10174/id_rsa Username:docker}
	I0921 22:18:25.879281  270464 system_pods.go:59] 9 kube-system pods found
	I0921 22:18:25.879308  270464 system_pods.go:61] "coredns-565d847f94-k9p5n" [9a7e4a83-e11c-4abc-b3f2-ff2fd6a9a44e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.879315  270464 system_pods.go:61] "etcd-newest-cni-20220921221720-10174" [d56d7a12-a093-43ea-97b4-e5b3cb66bf02] Running
	I0921 22:18:25.879320  270464 system_pods.go:61] "kindnet-gkz8f" [4f7b5b63-e3f9-41bc-803f-cab9949dfca2] Running
	I0921 22:18:25.879326  270464 system_pods.go:61] "kube-apiserver-newest-cni-20220921221720-10174" [51948604-2d5f-405e-b3a2-26740937866d] Running
	I0921 22:18:25.879335  270464 system_pods.go:61] "kube-controller-manager-newest-cni-20220921221720-10174" [5c390a04-523c-487d-bcd5-928f33ed5b04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:18:25.879345  270464 system_pods.go:61] "kube-proxy-47q56" [afb97502-915c-45c2-911b-22d200a8e934] Running
	I0921 22:18:25.879349  270464 system_pods.go:61] "kube-scheduler-newest-cni-20220921221720-10174" [c0e8a08e-f1a5-43e8-a9c3-0f0b36b0abcf] Running
	I0921 22:18:25.879355  270464 system_pods.go:61] "metrics-server-5c8fd5cf8-pb9zk" [92ed4169-eaf5-46ce-88e9-9b3459355c2f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.879365  270464 system_pods.go:61] "storage-provisioner" [5f9c7750-759e-4470-9a9b-b2b0487497b1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:18:25.879371  270464 system_pods.go:74] duration metric: took 18.275303ms to wait for pod list to return data ...
	I0921 22:18:25.879382  270464 default_sa.go:34] waiting for default service account to be created ...
	I0921 22:18:25.881525  270464 default_sa.go:45] found service account: "default"
	I0921 22:18:25.881543  270464 default_sa.go:55] duration metric: took 2.1536ms for default service account to be created ...
	I0921 22:18:25.881551  270464 kubeadm.go:573] duration metric: took 130.888457ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0921 22:18:25.881565  270464 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:18:25.884097  270464 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:18:25.884117  270464 node_conditions.go:123] node cpu capacity is 8
	I0921 22:18:25.884126  270464 node_conditions.go:105] duration metric: took 2.556931ms to run NodePressure ...
	I0921 22:18:25.884135  270464 start.go:216] waiting for startup goroutines ...
	I0921 22:18:25.945255  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:18:25.945285  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:18:25.949672  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:18:25.949688  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:18:25.949709  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:18:25.959632  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:18:25.959656  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:18:25.963888  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:18:25.963911  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:18:25.974223  270464 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:18:25.974250  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:18:25.978556  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:18:25.978581  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:18:25.978847  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:18:25.995095  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:18:25.995206  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:18:25.995231  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:18:26.011190  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:18:26.011243  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:18:26.085101  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:18:26.085150  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:18:26.104077  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:18:26.104114  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:18:26.188309  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:18:26.188339  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:18:26.205569  270464 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:18:26.205601  270464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:18:26.293701  270464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:18:26.601264  270464 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220921221720-10174"
	I0921 22:18:26.788594  270464 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:18:26.790190  270464 addons.go:414] enableAddons completed in 1.039462461s
	I0921 22:18:26.840016  270464 start.go:506] kubectl: 1.25.2, cluster: 1.25.2 (minor skew: 0)
	I0921 22:18:26.841805  270464 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220921221720-10174" cluster and "default" namespace by default
	I0921 22:18:24.478339  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... the identical pod_ready.go:102 message repeats at ~2.5 s intervals, 76 polls in all from 22:18:24 to 22:21:17; only the timestamp changes ...]
	I0921 22:21:17.977762  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
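
The loop above is minikube's readiness poll: pod_ready.go re-checks the coredns pod roughly every 2.5 s, and every probe fails for the same reason. The pod never schedules because the lone node still carries the node.kubernetes.io/not-ready:NoSchedule taint (the CNI plugin never initializes; see the node conditions further down). Below is a minimal sketch of the condition test being polled, written against the upstream k8s.io/api types; it is an illustration, not minikube's actual helper.

// readiness_sketch.go: a minimal sketch of the "Ready" condition test that
// pod_ready.go polls above, written against the upstream k8s.io/api types.
// Illustration only; this is not minikube's actual helper.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady returns true only when the pod reports a condition with
// Type=Ready and Status=True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// The pod in the log only ever carries PodScheduled=False
	// (Reason: Unschedulable), so a Ready condition never appears and
	// the poll keeps failing.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodScheduled, Status: corev1.ConditionFalse, Reason: "Unschedulable"},
			},
		},
	}
	fmt.Println(isPodReady(pod)) // false, matching every poll line above
}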
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8c81e8e062ce5       d921cee849482       3 minutes ago       Exited              kindnet-cni               3                   842ff71db5ddd
	3eefbcb898b09       1c7d8c51823b5       12 minutes ago      Running             kube-proxy                0                   3a052127f22d7
	6a4b91f0531d1       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   9756bf60beb90
	a9c3d39d9942f       ca0ea1ee3cfd3       12 minutes ago      Running             kube-scheduler            0                   26094dc69faf0
	b69529a7e224f       97801f8394908       12 minutes ago      Running             kube-apiserver            0                   6c9070db9088c
	b1a22ede66e31       dbfceb93c69b6       12 minutes ago      Running             kube-controller-manager   0                   649c092f0b0ca
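
The table shows the control-plane containers (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and kube-proxy running for 12 minutes, while kindnet-cni is on ATTEMPT 3 and currently Exited: a crash loop, which is why the node's CNI never comes up. A hedged client-go sketch of surfacing the same signal from pod status follows; the helper name and threshold are assumptions for illustration, not part of the test suite.

// restart_report.go: illustrative only; flagRestarting and its threshold
// are assumptions for this sketch, not minikube test code.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// flagRestarting prints any container whose restart count has reached the
// threshold. This is the pod-status view of the ATTEMPT column above.
func flagRestarting(pod *corev1.Pod, threshold int32) {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.RestartCount >= threshold {
			fmt.Printf("%s/%s restarted %d times (ready=%v)\n",
				pod.Name, cs.Name, cs.RestartCount, cs.Ready)
		}
	}
}

func main() {
	pod := &corev1.Pod{}
	pod.Name = "kindnet-27cj5"
	pod.Status.ContainerStatuses = []corev1.ContainerStatus{
		{Name: "kindnet-cni", RestartCount: 3, Ready: false},
	}
	flagRestarting(pod, 3) // kindnet-27cj5/kindnet-cni restarted 3 times (ready=false)
}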
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:08:33 UTC, end at Wed 2022-09-21 22:21:18 UTC. --
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.125930714Z" level=warning msg="cleaning up after shim disconnected" id=4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221 namespace=k8s.io
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.125942180Z" level=info msg="cleaning up dead shim"
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.137353933Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:14:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2923 runtime=io.containerd.runc.v2\n"
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.801705532Z" level=info msg="RemoveContainer for \"9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939\""
	Sep 21 22:14:38 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:38.808495036Z" level=info msg="RemoveContainer for \"9b4bcc68b201c6e0c9847ec783771a1871359f04fea4ce921778c106f7361939\" returns successfully"
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.113627373Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.128050108Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\""
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.128661254Z" level=info msg="StartContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\""
	Sep 21 22:14:50 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:14:50.195171816Z" level=info msg="StartContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\" returns successfully"
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.728316246Z" level=info msg="shim disconnected" id=543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.728384023Z" level=warning msg="cleaning up after shim disconnected" id=543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1 namespace=k8s.io
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.728394963Z" level=info msg="cleaning up dead shim"
	Sep 21 22:17:30 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:30.738340815Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:17:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3040 runtime=io.containerd.runc.v2\n"
	Sep 21 22:17:31 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:31.124831173Z" level=info msg="RemoveContainer for \"4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221\""
	Sep 21 22:17:31 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:31.130281615Z" level=info msg="RemoveContainer for \"4319d6b9051970d5c21e870c71eeb9d7c765b4c15cf0b862381f53977a9cc221\" returns successfully"
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.114835667Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.127471150Z" level=info msg="CreateContainer within sandbox \"842ff71db5ddd12bfafd846824f000dbe00a410f568767bbd1c8fb5bdb20f51e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b\""
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.128003409Z" level=info msg="StartContainer for \"8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b\""
	Sep 21 22:17:54 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:17:54.203756780Z" level=info msg="StartContainer for \"8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b\" returns successfully"
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.725730306Z" level=info msg="shim disconnected" id=8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.725808516Z" level=warning msg="cleaning up after shim disconnected" id=8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b namespace=k8s.io
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.725820081Z" level=info msg="cleaning up dead shim"
	Sep 21 22:20:34 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:34.735680498Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:20:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3152 runtime=io.containerd.runc.v2\n"
	Sep 21 22:20:35 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:35.469852172Z" level=info msg="RemoveContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\""
	Sep 21 22:20:35 no-preload-20220921220832-10174 containerd[512]: time="2022-09-21T22:20:35.475973253Z" level=info msg="RemoveContainer for \"543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1\" returns successfully"
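
The containerd excerpt records the same crash loop from the runtime side: each kindnet-cni attempt's shim disconnects after about 2m40s, the dead shim is cleaned up, the previous container is removed, and the kubelet requests the next Attempt. As a minimal sketch (assumed socket path and client setup, not taken from the test), the states in the container-status table can be read straight from containerd in the same k8s.io namespace these lines reference:

// containerd_status_sketch.go: a minimal sketch of listing container/task
// state over containerd's socket. Assumed setup for illustration; not part
// of the test run above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace,
	// as the namespace=k8s.io fields in the log lines above show.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// No running task, e.g. an exited kindnet-cni attempt.
			fmt.Printf("%s: no task (%v)\n", c.ID(), err)
			continue
		}
		status, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s\n", c.ID(), status.Status)
	}
}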
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220921220832-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220921220832-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=no-preload-20220921220832-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_08_58_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:08:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220921220832-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:21:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:19:40 +0000   Wed, 21 Sep 2022 22:08:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220921220832-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                44c6c62a-5061-4f07-a2f0-9d563da1b73e
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220921220832-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-27cj5                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-20220921220832-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-20220921220832-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nxpf5                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-20220921220832-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646] <==
	* {"level":"info","ts":"2022-09-21T22:09:12.389Z","caller":"traceutil/trace.go:171","msg":"trace[281568925] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"276.956319ms","start":"2022-09-21T22:09:12.112Z","end":"2022-09-21T22:09:12.389Z","steps":["trace[281568925] 'process raft request'  (duration: 276.78745ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[34397509] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:344; }","duration":"274.143178ms","start":"2022-09-21T22:09:12.115Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[34397509] 'read index received'  (duration: 184.947899ms)","trace[34397509] 'applied index is now lower than readState.Index'  (duration: 89.194459ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[86480274] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"267.792108ms","start":"2022-09-21T22:09:12.122Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[86480274] 'process raft request'  (duration: 267.534906ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[1151117429] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"270.92859ms","start":"2022-09-21T22:09:12.119Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[1151117429] 'process raft request'  (duration: 270.611323ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.390Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"274.275236ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2022-09-21T22:09:12.390Z","caller":"traceutil/trace.go:171","msg":"trace[490030946] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"274.326664ms","start":"2022-09-21T22:09:12.115Z","end":"2022-09-21T22:09:12.390Z","steps":["trace[490030946] 'agreement among raft nodes before linearized reading'  (duration: 274.229021ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.398Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"281.551371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4601"}
	{"level":"info","ts":"2022-09-21T22:09:12.398Z","caller":"traceutil/trace.go:171","msg":"trace[21787300] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:338; }","duration":"281.619065ms","start":"2022-09-21T22:09:12.116Z","end":"2022-09-21T22:09:12.398Z","steps":["trace[21787300] 'agreement among raft nodes before linearized reading'  (duration: 281.515501ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.516Z","caller":"traceutil/trace.go:171","msg":"trace[1947667569] linearizableReadLoop","detail":"{readStateIndex:355; appliedIndex:355; }","duration":"118.331899ms","start":"2022-09-21T22:09:12.397Z","end":"2022-09-21T22:09:12.516Z","steps":["trace[1947667569] 'read index received'  (duration: 118.31338ms)","trace[1947667569] 'applied index is now lower than readState.Index'  (duration: 15.637µs)"],"step_count":2}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"125.444129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-27cj5\" ","response":"range_response_count:1 size:3714"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[877726603] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-27cj5; range_end:; response_count:1; response_revision:342; }","duration":"125.520607ms","start":"2022-09-21T22:09:12.392Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[877726603] 'agreement among raft nodes before linearized reading'  (duration: 123.409684ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"124.770564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1569897163] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:342; }","duration":"124.848752ms","start":"2022-09-21T22:09:12.393Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1569897163] 'agreement among raft nodes before linearized reading'  (duration: 122.552703ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[584162130] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"116.491437ms","start":"2022-09-21T22:09:12.402Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[584162130] 'process raft request'  (duration: 116.413393ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1358578711] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"118.162741ms","start":"2022-09-21T22:09:12.400Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1358578711] 'process raft request'  (duration: 115.718872ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.950399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-27cj5\" ","response":"range_response_count:1 size:3714"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1353864366] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-27cj5; range_end:; response_count:1; response_revision:346; }","duration":"119.996483ms","start":"2022-09-21T22:09:12.398Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1353864366] 'agreement among raft nodes before linearized reading'  (duration: 119.913158ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[677856182] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"116.412604ms","start":"2022-09-21T22:09:12.402Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[677856182] 'process raft request'  (duration: 116.313434ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.884809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4601"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[39557479] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"117.099753ms","start":"2022-09-21T22:09:12.401Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[39557479] 'process raft request'  (duration: 116.94402ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1861985295] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:346; }","duration":"116.918724ms","start":"2022-09-21T22:09:12.401Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1861985295] 'agreement among raft nodes before linearized reading'  (duration: 116.853502ms)"],"step_count":1}
	{"level":"warn","ts":"2022-09-21T22:09:12.518Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"120.054969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-nxpf5\" ","response":"range_response_count:1 size:4456"}
	{"level":"info","ts":"2022-09-21T22:09:12.518Z","caller":"traceutil/trace.go:171","msg":"trace[1034205839] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-nxpf5; range_end:; response_count:1; response_revision:346; }","duration":"120.090347ms","start":"2022-09-21T22:09:12.398Z","end":"2022-09-21T22:09:12.518Z","steps":["trace[1034205839] 'agreement among raft nodes before linearized reading'  (duration: 120.027308ms)"],"step_count":1}
	{"level":"info","ts":"2022-09-21T22:18:52.490Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":452}
	{"level":"info","ts":"2022-09-21T22:18:52.490Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":452,"took":"357.608µs"}
	
	* 
	* ==> kernel <==
	*  22:21:18 up  1:03,  0 users,  load average: 0.49, 1.11, 1.79
	Linux no-preload-20220921220832-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0] <==
	* I0921 22:08:55.091280       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:08:55.162042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:08:55.175656       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:08:55.175803       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:08:55.175922       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:08:55.176029       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:08:55.176068       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:08:55.176224       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:08:55.756064       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:08:55.965619       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:08:55.968635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:08:55.968658       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:08:56.332396       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:08:56.376266       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:08:56.506135       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:08:56.511625       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0921 22:08:56.512520       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:08:56.515761       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:08:57.004401       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:08:57.961688       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:08:57.968176       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:08:57.975887       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:08:58.080449       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:09:11.440196       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0921 22:09:11.442903       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409] <==
	* I0921 22:09:10.783384       1 taint_manager.go:204] "Starting NoExecuteTaintManager"
	W0921 22:09:10.783442       1 node_lifecycle_controller.go:1058] Missing timestamp for Node no-preload-20220921220832-10174. Assuming now as a timestamp.
	I0921 22:09:10.783454       1 taint_manager.go:209] "Sending events to api server"
	I0921 22:09:10.783502       1 event.go:294] "Event occurred" object="no-preload-20220921220832-10174" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller"
	I0921 22:09:10.783512       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0921 22:09:10.827001       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0921 22:09:10.840033       1 range_allocator.go:367] Set node no-preload-20220921220832-10174 PodCIDR to [10.244.0.0/24]
	I0921 22:09:10.840385       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:09:10.865038       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:09:10.880243       1 shared_informer.go:262] Caches are synced for crt configmap
	I0921 22:09:10.932197       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:09:10.932227       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0921 22:09:10.932229       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:09:10.932439       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:09:10.932555       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:09:11.261700       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:09:11.282960       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:09:11.282991       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:09:11.720390       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:09:11.808535       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nxpf5"
	I0921 22:09:11.808562       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-27cj5"
	I0921 22:09:12.391915       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-9864v"
	I0921 22:09:12.399398       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-m8xgt"
	I0921 22:09:12.734086       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:09:12.739609       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-9864v"
	
	* 
	* ==> kube-proxy [3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843] <==
	* I0921 22:09:13.034514       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0921 22:09:13.034595       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0921 22:09:13.034633       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:09:13.054326       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:09:13.054377       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:09:13.054390       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:09:13.054418       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:09:13.054463       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:09:13.054692       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:09:13.055025       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:09:13.055049       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:09:13.055668       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:09:13.055697       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:09:13.055773       1 config.go:444] "Starting node config controller"
	I0921 22:09:13.055782       1 config.go:317] "Starting service config controller"
	I0921 22:09:13.055817       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:09:13.055807       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:09:13.156647       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:09:13.156676       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:09:13.156693       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb] <==
	* W0921 22:08:55.106405       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:08:55.106631       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:08:55.106455       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:08:55.106648       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:08:55.106452       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:08:55.106664       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0921 22:08:55.106822       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:08:55.106832       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:08:55.106813       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106848       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:08:55.106851       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:08:55.106908       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106964       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.106987       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:08:55.107358       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:08:55.107467       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:08:55.937169       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:08:55.937212       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:08:56.007668       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:08:56.007706       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:08:56.040851       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:08:56.040885       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:08:56.152475       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:08:56.152518       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0921 22:08:56.597365       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:08:33 UTC, end at Wed 2022-09-21 22:21:19 UTC. --
	Sep 21 22:20:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:03.454577    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:08 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:08.455920    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:13 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:13.456944    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:18 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:18.458076    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:23 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:23.459611    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:28 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:28.460534    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:33 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:33.461879    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:35 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:20:35.468570    1740 scope.go:115] "RemoveContainer" containerID="543a5436a22f4e9205866f688547d1b2d796db21aeedbb33be386e241f16bda1"
	Sep 21 22:20:35 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:20:35.468863    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:20:35 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:35.469235    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	Sep 21 22:20:38 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:38.463058    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:43 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:43.464499    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:48 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:48.466201    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:49 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:20:49.111033    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:20:49 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:49.111304    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	Sep 21 22:20:53 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:53.467967    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:20:58 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:20:58.469381    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:03 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:21:03.112041    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:21:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:03.112331    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	Sep 21 22:21:03 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:03.471063    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:08 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:08.472467    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:13 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:13.474032    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:21:14 no-preload-20220921220832-10174 kubelet[1740]: I0921 22:21:14.111497    1740 scope.go:115] "RemoveContainer" containerID="8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	Sep 21 22:21:14 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:14.111812    1740 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-27cj5_kube-system(90383218-a547-458a-8b5e-af84c9d2b017)\"" pod="kube-system/kindnet-27cj5" podUID=90383218-a547-458a-8b5e-af84c9d2b017
	Sep 21 22:21:18 no-preload-20220921220832-10174 kubelet[1740]: E0921 22:21:18.475564    1740 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
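The dump above shows a single failure mode: the node stays NotReady because the CNI plugin never initializes ("cni plugin not initialized" throughout the kubelet section), and the kindnet-cni container that would install the CNI config is stuck in CrashLoopBackOff. A quick way to confirm this against the live profile would be something like the following (illustrative commands using this run's context name, not part of the harness):

	kubectl --context no-preload-20220921220832-10174 get node no-preload-20220921220832-10174 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
	kubectl --context no-preload-20220921220832-10174 -n kube-system logs ds/kindnet -c kindnet-cni --previous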
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-m8xgt storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe pod busybox coredns-565d847f94-m8xgt storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220921220832-10174 describe pod busybox coredns-565d847f94-m8xgt storage-provisioner: exit status 1 (65.662017ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8284w (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-8284w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m51s (x2 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-m8xgt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220921220832-10174 describe pod busybox coredns-565d847f94-m8xgt storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (484.34s)
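The DeployApp failure follows directly from the node state above: the busybox pod only carries the default NoExecute tolerations, while the node still holds the node.kubernetes.io/not-ready:NoSchedule taint, so the scheduler can never place it. A sketch of how one might inspect the taint and the resulting scheduling events (illustrative, not harness commands):

	kubectl --context no-preload-20220921220832-10174 describe node no-preload-20220921220832-10174 | grep -A1 'Taints:'
	kubectl --context no-preload-20220921220832-10174 get events -n default --field-selector reason=FailedScheduling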

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [38d822e3-2ac4-43a7-ae7a-4f3f7853b1fc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0921 22:16:02.147457   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:16:20.828189   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:16:38.505316   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 22:16:59.249987   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
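The cert_rotation errors above are most likely noise from the shared test process (pid 10174): client-go's certificate-rotation watcher still tracks kubeconfig entries for profiles that earlier tests deleted (cilium, enable-default-cni, ingress-addon-legacy, bridge), so their client.crt files no longer exist. An illustrative way to spot such stale entries:

	kubectl config get-contexts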

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: ***** TestStartStop/group/default-k8s-different-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:196: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
start_stop_delete_test.go:196: TestStartStop/group/default-k8s-different-port/serial/DeployApp: showing logs for failed pods as of 2022-09-21 22:23:55.154801752 +0000 UTC m=+3403.293253760
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe po busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context default-k8s-different-port-20220921221118-10174 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wppsb (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-wppsb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  2m46s (x2 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 logs busybox -n default
start_stop_delete_test.go:196: (dbg) kubectl --context default-k8s-different-port-20220921221118-10174 logs busybox -n default:
start_stop_delete_test.go:196: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
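The wait at start_stop_delete_test.go:196 polls for a Ready pod matching the label for up to 8m0s. Outside the harness, roughly the same check can be expressed as below (a sketch using this run's context name); it would time out the same way while the node keeps its not-ready taint:

	kubectl --context default-k8s-different-port-20220921221118-10174 -n default wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s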
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221118-10174
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220921221118-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112",
	        "Created": "2022-09-21T22:11:25.759772693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:11:26.140466749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hostname",
	        "HostsPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hosts",
	        "LogPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112-json.log",
	        "Name": "/default-k8s-different-port-20220921221118-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220921221118-10174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220921221118-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220921221118-10174",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220921221118-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220921221118-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2c16cce9402b8d39506117583a7fad80a94710d15dab294e1374d69074b6b894",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2c16cce9402b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220921221118-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "37728b19138a",
	                        "default-k8s-different-port-20220921221118-10174"
	                    ],
	                    "NetworkID": "e093ea2ee154cf6d0e5d3b4a191700b36287f8ecd49e1b54f684a8f299ea6b79",
	                    "EndpointID": "adb7408d4c9675e8a8c7221c5c44296bade020a1fef2417db2c78e1b8536881c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
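(Editor's note: the dump above is the raw output of docker container inspect for the test's KIC container; the post-mortem helpers shell out to Docker to capture it. Below is a minimal Go sketch of that pattern, an illustration only and not the actual helpers_test.go code; the function name is hypothetical and the container name is the profile from the run above.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspectContainer shells out to "docker container inspect", the same
	// command the post-mortem helpers use, and returns the raw JSON dump.
	func inspectContainer(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Container name taken from the failed profile above.
		dump, err := inspectContainer("default-k8s-different-port-20220921221118-10174")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(dump)
	}

(The same command with a Go template, as seen later in this log, extracts a single field such as the mapped SSH port: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" <container>.)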
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220921221118-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174                    |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:21:21
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:21:21.729027  276511 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:21:21.729174  276511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:21:21.729189  276511 out.go:309] Setting ErrFile to fd 2...
	I0921 22:21:21.729194  276511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:21:21.729308  276511 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:21:21.729870  276511 out.go:303] Setting JSON to false
	I0921 22:21:21.731566  276511 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3833,"bootTime":1663795049,"procs":716,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:21:21.731629  276511 start.go:125] virtualization: kvm guest
	I0921 22:21:21.734495  276511 out.go:177] * [no-preload-20220921220832-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:21:21.736412  276511 notify.go:214] Checking for updates...
	I0921 22:21:21.737826  276511 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:21:21.739371  276511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:21:21.740848  276511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:21:21.742164  276511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:21:21.743463  276511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:21:21.745159  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:21:21.745572  276511 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:21:21.776785  276511 docker.go:137] docker version: linux-20.10.18
	I0921 22:21:21.776874  276511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:21:21.873005  276511 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:21:21.797949632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:21:21.873105  276511 docker.go:254] overlay module found
	I0921 22:21:21.875489  276511 out.go:177] * Using the docker driver based on existing profile
	I0921 22:21:21.876982  276511 start.go:284] selected driver: docker
	I0921 22:21:21.877000  276511 start.go:808] validating driver "docker" against &{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:21.877104  276511 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:21:21.877949  276511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:21:21.972195  276511 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:21:21.898685177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:21:21.972596  276511 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:21:21.972625  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:21.972634  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:21.972657  276511 start_flags.go:316] config:
	{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:21.975206  276511 out.go:177] * Starting control plane node no-preload-20220921220832-10174 in cluster no-preload-20220921220832-10174
	I0921 22:21:21.976541  276511 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:21:21.978261  276511 out.go:177] * Pulling base image ...
	I0921 22:21:21.979898  276511 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:21:21.980011  276511 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:21:21.980055  276511 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:21:21.980230  276511 cache.go:107] acquiring lock: {Name:mk964a2e66a5444defeab854e6434a6f27bdb527 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980240  276511 cache.go:107] acquiring lock: {Name:mka10a341c76ae214d12cf65b1bbb970ff641c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980291  276511 cache.go:107] acquiring lock: {Name:mkb5c943b9da9e6c7ecc443b377ab990272f1b2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980336  276511 cache.go:107] acquiring lock: {Name:mk944562b9b2415f3d8e7ad36b373f92205bdb5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980366  276511 cache.go:107] acquiring lock: {Name:mk6ae321142fb89935897137e30217f9ae2499ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980402  276511 cache.go:107] acquiring lock: {Name:mk0eb3fbf1ee9e76ad78bfdee22277edae17ed2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980366  276511 cache.go:107] acquiring lock: {Name:mk4fab6516978f221b8246a61f380f8ab97f066c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980335  276511 cache.go:107] acquiring lock: {Name:mkee4799116b59e3f65d0127cdad0c25a01a05e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980556  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 exists
	I0921 22:21:21.980581  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0921 22:21:21.980559  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 exists
	I0921 22:21:21.980583  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0921 22:21:21.980592  276511 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2" took 362.285µs
	I0921 22:21:21.980608  276511 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 succeeded
	I0921 22:21:21.980603  276511 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 272.508µs
	I0921 22:21:21.980617  276511 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0921 22:21:21.980614  276511 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 397.033µs
	I0921 22:21:21.980610  276511 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2" took 300.17µs
	I0921 22:21:21.980625  276511 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0921 22:21:21.980629  276511 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 succeeded
	I0921 22:21:21.980647  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 exists
	I0921 22:21:21.980673  276511 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2" took 420.957µs
	I0921 22:21:21.980689  276511 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 succeeded
	I0921 22:21:21.980713  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0921 22:21:21.980730  276511 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 401.678µs
	I0921 22:21:21.980744  276511 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0921 22:21:21.980757  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0921 22:21:21.980790  276511 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 470.77µs
	I0921 22:21:21.980807  276511 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0921 22:21:21.980833  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 exists
	I0921 22:21:21.980848  276511 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2" took 492.866µs
	I0921 22:21:21.980861  276511 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 succeeded
	I0921 22:21:21.980876  276511 cache.go:87] Successfully saved all images to host disk.
	I0921 22:21:22.004613  276511 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:21:22.004656  276511 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:21:22.004676  276511 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:21:22.004708  276511 start.go:364] acquiring machines lock for no-preload-20220921220832-10174: {Name:mk189db360f5ac486cb35206c34214af6d1c65b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:22.004793  276511 start.go:368] acquired machines lock for "no-preload-20220921220832-10174" in 64.56µs
	I0921 22:21:22.004813  276511 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:21:22.004818  276511 fix.go:55] fixHost starting: 
	I0921 22:21:22.005039  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:21:22.028746  276511 fix.go:103] recreateIfNeeded on no-preload-20220921220832-10174: state=Stopped err=<nil>
	W0921 22:21:22.028785  276511 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:21:22.031134  276511 out.go:177] * Restarting existing docker container for "no-preload-20220921220832-10174" ...
	I0921 22:21:19.977941  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:22.477413  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:22.032731  276511 cli_runner.go:164] Run: docker start no-preload-20220921220832-10174
	I0921 22:21:22.397294  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:21:22.425241  276511 kic.go:415] container "no-preload-20220921220832-10174" state is running.
	I0921 22:21:22.425628  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:22.452469  276511 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:21:22.452688  276511 machine.go:88] provisioning docker machine ...
	I0921 22:21:22.452713  276511 ubuntu.go:169] provisioning hostname "no-preload-20220921220832-10174"
	I0921 22:21:22.452750  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:22.481744  276511 main.go:134] libmachine: Using SSH client type: native
	I0921 22:21:22.481925  276511 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49438 <nil> <nil>}
	I0921 22:21:22.481949  276511 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220921220832-10174 && echo "no-preload-20220921220832-10174" | sudo tee /etc/hostname
	I0921 22:21:22.482598  276511 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35926->127.0.0.1:49438: read: connection reset by peer
	I0921 22:21:25.619844  276511 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220921220832-10174
	
	I0921 22:21:25.619917  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:25.644377  276511 main.go:134] libmachine: Using SSH client type: native
	I0921 22:21:25.644520  276511 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49438 <nil> <nil>}
	I0921 22:21:25.644541  276511 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220921220832-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220921220832-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220921220832-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:21:25.771438  276511 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:21:25.771470  276511 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:21:25.771545  276511 ubuntu.go:177] setting up certificates
	I0921 22:21:25.771554  276511 provision.go:83] configureAuth start
	I0921 22:21:25.771606  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:25.795693  276511 provision.go:138] copyHostCerts
	I0921 22:21:25.795778  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:21:25.795798  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:21:25.795864  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:21:25.795944  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:21:25.795955  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:21:25.795981  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:21:25.796035  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:21:25.796044  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:21:25.796066  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:21:25.796151  276511 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220921220832-10174 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220921220832-10174]
	I0921 22:21:25.980041  276511 provision.go:172] copyRemoteCerts
	I0921 22:21:25.980129  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:21:25.980174  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.005654  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.099196  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:21:26.116665  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0921 22:21:26.133700  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:21:26.150095  276511 provision.go:86] duration metric: configureAuth took 378.527139ms
	I0921 22:21:26.150126  276511 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:21:26.150282  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:21:26.150293  276511 machine.go:91] provisioned docker machine in 3.697591605s
	I0921 22:21:26.150301  276511 start.go:300] post-start starting for "no-preload-20220921220832-10174" (driver="docker")
	I0921 22:21:26.150307  276511 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:21:26.150350  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:21:26.150391  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.177098  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.266994  276511 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:21:26.269733  276511 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:21:26.269758  276511 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:21:26.269766  276511 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:21:26.269773  276511 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:21:26.269784  276511 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:21:26.269843  276511 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:21:26.269931  276511 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:21:26.270038  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:21:26.276595  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:21:26.293384  276511 start.go:303] post-start completed in 143.069577ms
	I0921 22:21:26.293459  276511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:21:26.293509  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.319279  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.412318  276511 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:21:26.416228  276511 fix.go:57] fixHost completed within 4.411406055s
	I0921 22:21:26.416252  276511 start.go:83] releasing machines lock for "no-preload-20220921220832-10174", held for 4.411447835s
	I0921 22:21:26.416336  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:26.439824  276511 ssh_runner.go:195] Run: systemctl --version
	I0921 22:21:26.439875  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.439894  276511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:21:26.439973  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.463981  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.464292  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
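
Each "new ssh client" line corresponds to an SSH connection into the node container via the port Docker published for 22/tcp (mapped to 127.0.0.1:49438 here), authenticated with the machine's id_rsa key. A minimal sketch with golang.org/x/crypto/ssh, with address, user, and key path taken from the log; host-key checking is skipped, which is tolerable only because the target is a throwaway local test node. This is not minikube's actual sshutil implementation:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path is a placeholder for the machine's id_rsa from the log.
        keyBytes, err := os.ReadFile("id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test node only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49438", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        // Run one of the same probes the log shows.
        out, err := sess.CombinedOutput("cat /etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
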
	I0921 22:21:26.585502  276511 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:21:26.597003  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:21:26.606196  276511 docker.go:188] disabling docker service ...
	I0921 22:21:26.606244  276511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:21:26.615407  276511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:21:26.623690  276511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:21:26.699874  276511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:21:24.477809  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:26.976994  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:26.778612  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:21:26.787337  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:21:26.799540  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:21:26.807935  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:21:26.815661  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:21:26.823769  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
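
The four sed one-liners above patch containerd's configuration in place. After they run, the affected /etc/containerd/config.toml entries would read as follows (values taken directly from the sed replacement strings, including the non-standard CNI conf_dir):

    sandbox_image = "registry.k8s.io/pause:3.8"
    restrict_oom_score_adj = false
    SystemdCgroup = false
    conf_dir = "/etc/cni/net.mk"

The `systemctl daemon-reload` and `systemctl restart containerd` a few lines below are what make these edits take effect.
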
	I0921 22:21:26.831216  276511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:21:26.837204  276511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:21:26.843235  276511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:21:26.913162  276511 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:21:26.985402  276511 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:21:26.985482  276511 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:21:26.989229  276511 start.go:471] Will wait 60s for crictl version
	I0921 22:21:26.989292  276511 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:21:27.015951  276511 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:21:27Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
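
The runner treats "server is not initialized yet" as transient: containerd was just restarted, so its CRI plugin needs a moment before `crictl version` succeeds (it does, at 22:21:38 below). A sketch of that poll-until-deadline pattern; the interval and timeout here are illustrative, since minikube's retry.go computes its own backoff:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForCRI re-runs `sudo crictl version` until it succeeds or the
    // deadline passes, mirroring the "will retry after ..." lines above.
    func waitForCRI(timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Printf("crictl ready:\n%s", out)
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("crictl not ready after %v: %v", timeout, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForCRI(60*time.Second, 5*time.Second); err != nil {
            fmt.Println(err)
        }
    }
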
	I0921 22:21:28.977565  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:31.477682  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:33.976943  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:36.476620  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
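
These interleaved pod_ready lines come from a second concurrent test profile (process 265259): its coredns pod stays Pending because the single node still carries the node.kubernetes.io/not-ready taint, which the scheduler will not ignore until the node (typically once its CNI is up) reports Ready. One way to confirm the blocking taint, for example:

    kubectl describe node <node-name> | grep -A2 Taints
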
	I0921 22:21:38.063256  276511 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:21:38.087330  276511 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:21:38.087394  276511 ssh_runner.go:195] Run: containerd --version
	I0921 22:21:38.117027  276511 ssh_runner.go:195] Run: containerd --version
	I0921 22:21:38.148570  276511 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:21:38.150093  276511 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:21:38.172557  276511 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0921 22:21:38.175833  276511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
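
The one-liner above rewrites /etc/hosts by filtering out any previous host.minikube.internal line, appending the fresh mapping, and copying the temp file back into place. The same idea in Go, as a rough sketch (sudo semantics and error paths simplified):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry keeps exactly one "<ip>\t<name>" line in the hosts
    // file - the grep -v / echo / cp pattern from the log, minus sudo.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
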
	I0921 22:21:38.185102  276511 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:21:38.185143  276511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:21:38.207088  276511 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:21:38.207109  276511 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:21:38.207180  276511 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:21:38.230239  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:38.230269  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:38.230283  276511 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:21:38.230305  276511 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220921220832-10174 NodeName:no-preload-20220921220832-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:21:38.230491  276511 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220921220832-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:21:38.230603  276511 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220921220832-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
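
In the 10-kubeadm.conf drop-in above, the empty `ExecStart=` line is deliberate: systemd requires clearing an inherited ExecStart before a drop-in may redefine it for a regular (non-oneshot) service. After writing a drop-in like this, the standard sequence to pick it up would be:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet

which matches the daemon-reload pattern the log already showed for containerd.
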
	I0921 22:21:38.230653  276511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:21:38.237825  276511 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:21:38.237881  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:21:38.244824  276511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0921 22:21:38.257993  276511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:21:38.270025  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0921 22:21:38.282061  276511 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:21:38.285065  276511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:21:38.294394  276511 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174 for IP: 192.168.94.2
	I0921 22:21:38.294515  276511 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:21:38.294555  276511 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:21:38.294619  276511 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key
	I0921 22:21:38.294690  276511 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a
	I0921 22:21:38.294731  276511 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key
	I0921 22:21:38.294821  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:21:38.294848  276511 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:21:38.294860  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:21:38.294885  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:21:38.294912  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:21:38.294934  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:21:38.294971  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:21:38.295476  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:21:38.312346  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:21:38.328491  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:21:38.344965  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:21:38.361363  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:21:38.378193  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:21:38.394663  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:21:38.411219  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:21:38.427455  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:21:38.443759  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:21:38.459952  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:21:38.477220  276511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:21:38.490029  276511 ssh_runner.go:195] Run: openssl version
	I0921 22:21:38.494865  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:21:38.502105  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.505092  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.505143  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.510082  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:21:38.516779  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:21:38.524387  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.527407  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.527449  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.532184  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:21:38.538593  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:21:38.545959  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.548914  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.548957  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.553573  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
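
The test-and-link commands above implement OpenSSL's CApath lookup convention: a certificate is located through a symlink named after its subject hash with a ".0" suffix (the step that c_rehash automates). A small sketch of the same hash-then-link operation, assuming an openssl binary on PATH and write access to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Compute the subject hash of the CA cert, as in the log.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Equivalent of ln -fs: drop any stale link, then relink.
        os.Remove(link)
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }
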
	I0921 22:21:38.560211  276511 kubeadm.go:396] StartCluster: {Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:38.560292  276511 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:21:38.560329  276511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:21:38.584578  276511 cri.go:87] found id: "8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	I0921 22:21:38.584604  276511 cri.go:87] found id: "3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843"
	I0921 22:21:38.584611  276511 cri.go:87] found id: "6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646"
	I0921 22:21:38.584617  276511 cri.go:87] found id: "a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb"
	I0921 22:21:38.584622  276511 cri.go:87] found id: "b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0"
	I0921 22:21:38.584629  276511 cri.go:87] found id: "b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409"
	I0921 22:21:38.584635  276511 cri.go:87] found id: ""
	I0921 22:21:38.584680  276511 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:21:38.597489  276511 cri.go:114] JSON = null
	W0921 22:21:38.597556  276511 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
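
The "JSON = null" line explains the warning just below it: with no paused containers under that runc root, `runc list -f json` prints the literal null, which decodes to an empty list even though `crictl ps` sees six containers. A sketch of that decoding step, for illustration:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // `runc list -f json` prints "null" when nothing is visible in the
        // given root; json.Unmarshal maps that to a nil slice, hence the
        // "list returned 0 containers, but ps returned 6" mismatch.
        out, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            panic(err)
        }
        var containers []map[string]interface{}
        if err := json.Unmarshal(out, &containers); err != nil {
            panic(err)
        }
        fmt.Printf("runc sees %d containers\n", len(containers))
    }
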
	I0921 22:21:38.597640  276511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:21:38.604641  276511 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:21:38.604678  276511 kubeadm.go:627] restartCluster start
	I0921 22:21:38.604716  276511 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:21:38.611273  276511 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:38.611984  276511 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220921220832-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:21:38.612435  276511 kubeconfig.go:127] "no-preload-20220921220832-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:21:38.613052  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:21:38.614343  276511 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:21:38.620864  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:38.620917  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:38.628681  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:38.829072  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:38.829161  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:38.837312  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.029609  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.029716  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.038394  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.229726  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.229799  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.238375  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.429768  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.429867  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.438213  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.629500  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.629592  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.638208  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.829520  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.829665  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.838208  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.029479  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.029573  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.038635  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.228885  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.228956  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.237569  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.429785  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.429859  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.438642  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.628883  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.628958  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.637446  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.829709  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.829789  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.838273  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.029560  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.029638  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.038065  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.229380  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.229482  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.238040  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.429329  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.429408  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.437964  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.629268  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.629339  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.637793  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.637813  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.637849  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.645663  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.645692  276511 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
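
Each "Checking apiserver status" round above is a pgrep for a kube-apiserver process. pgrep exits 1 when nothing matches, so "Process exited with status 1" means only that the apiserver is not running yet; after roughly three seconds of misses, the runner concludes the cluster needs reconfiguring. A sketch of the probe:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiServerRunning reports whether a kube-apiserver process matching
    // the minikube pattern exists. pgrep's exit status 1 means "no match",
    // which the log records as a warning rather than a hard error.
    func apiServerRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        fmt.Println("apiserver up:", apiServerRunning())
    }
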
	I0921 22:21:41.645700  276511 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:21:41.645711  276511 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:21:41.645761  276511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:21:41.669678  276511 cri.go:87] found id: "8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	I0921 22:21:41.669709  276511 cri.go:87] found id: "3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843"
	I0921 22:21:41.669719  276511 cri.go:87] found id: "6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646"
	I0921 22:21:41.669728  276511 cri.go:87] found id: "a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb"
	I0921 22:21:41.669736  276511 cri.go:87] found id: "b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0"
	I0921 22:21:41.669746  276511 cri.go:87] found id: "b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409"
	I0921 22:21:41.669758  276511 cri.go:87] found id: ""
	I0921 22:21:41.669765  276511 cri.go:232] Stopping containers: [8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b 3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843 6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646 a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0 b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409]
	I0921 22:21:41.669831  276511 ssh_runner.go:195] Run: which crictl
	I0921 22:21:41.672722  276511 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b 3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843 6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646 a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0 b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409
	I0921 22:21:41.698115  276511 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:21:41.708176  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:21:41.715094  276511 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 21 22:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 21 22:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:08 /etc/kubernetes/scheduler.conf
	
	I0921 22:21:41.715152  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0921 22:21:41.721698  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0921 22:21:41.728286  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0921 22:21:38.477722  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:40.976919  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:42.977016  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:41.734815  276511 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.734874  276511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:21:41.741153  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0921 22:21:41.747551  276511 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.747599  276511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:21:41.753773  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:21:41.760238  276511 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:21:41.760255  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:41.804588  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.356962  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.489434  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.539390  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.683809  276511 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:21:42.683920  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.194560  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.694761  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.776158  276511 api_server.go:71] duration metric: took 1.092348408s to wait for apiserver process to appear ...
	I0921 22:21:43.776236  276511 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:21:43.776260  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:43.776614  276511 api_server.go:256] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0921 22:21:44.276913  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:46.667105  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:21:46.667136  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:21:45.477739  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:47.976841  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:46.777448  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:46.781780  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:21:46.781806  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:47.277400  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:47.282106  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:47.777302  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:47.781834  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:48.277407  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:48.283340  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0921 22:21:48.290556  276511 api_server.go:140] control plane version: v1.25.2
	I0921 22:21:48.290586  276511 api_server.go:130] duration metric: took 4.514332252s to wait for apiserver health ...
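For context on the loop above: api_server.go probes /healthz roughly every 500ms, treating HTTP 500 (post-start hooks like rbac/bootstrap-roles still failing) as "not ready" until the endpoint returns 200. Below is a minimal standalone sketch of that poll-until-healthy pattern; it is illustrative only, not minikube's actual helper, and it skips TLS verification where minikube trusts the cluster CA instead.

	// healthz_wait.go: hypothetical sketch of polling an apiserver /healthz
	// endpoint until it returns 200 OK or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skip verification in this sketch; the real check trusts the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the log above
		}
		return fmt.Errorf("timed out waiting for %s to report healthy", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}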
	I0921 22:21:48.290599  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:48.290609  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:48.293728  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:21:48.295168  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:21:48.298937  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:21:48.298959  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:21:48.313543  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
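The CNI step above stages the manifest ("scp memory --> /var/tmp/minikube/cni.yaml") and then applies it with the version-pinned kubectl inside the node. A hedged local sketch of that apply step follows; minikube actually runs this over SSH via ssh_runner, and the paths are copied from the log.

	// cni_apply.go: hypothetical sketch; runs kubectl apply locally instead of
	// over SSH inside the minikube node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.2/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}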
	I0921 22:21:49.163078  276511 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:21:49.170085  276511 system_pods.go:59] 9 kube-system pods found
	I0921 22:21:49.170122  276511 system_pods.go:61] "coredns-565d847f94-m8xgt" [67685b7a-28c7-49a1-a4aa-e82aadc5a69b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170132  276511 system_pods.go:61] "etcd-no-preload-20220921220832-10174" [0fca2788-2ad8-4e18-b8e5-e39cefa36c58] Running
	I0921 22:21:49.170141  276511 system_pods.go:61] "kindnet-27cj5" [90383218-a547-458a-8b5e-af84c9d2b017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0921 22:21:49.170148  276511 system_pods.go:61] "kube-apiserver-no-preload-20220921220832-10174" [3d9f96c7-a367-41ec-8423-c106fa567853] Running
	I0921 22:21:49.170160  276511 system_pods.go:61] "kube-controller-manager-no-preload-20220921220832-10174" [86ad77b8-aa2b-4d95-a588-48d9493546d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:21:49.170171  276511 system_pods.go:61] "kube-proxy-nxpf5" [ff6290f8-6cb7-4fae-99a2-7e36bb2e525b] Running
	I0921 22:21:49.170182  276511 system_pods.go:61] "kube-scheduler-no-preload-20220921220832-10174" [9c1e10b4-b7eb-4633-a544-62cbe7ed19d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0921 22:21:49.170196  276511 system_pods.go:61] "metrics-server-5c8fd5cf8-l82b6" [c17d4483-0758-4a2c-b310-2451393c8fa9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170208  276511 system_pods.go:61] "storage-provisioner" [51a29d45-5827-48fc-a122-67c7c5c5d190] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170220  276511 system_pods.go:74] duration metric: took 7.119308ms to wait for pod list to return data ...
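The system_pods wait above lists every kube-system pod and records which ones are Running versus Pending; here the Pending ones are blocked by the node.kubernetes.io/not-ready taint until the CNI comes up. A one-shot, hedged equivalent using kubectl's jsonpath output (binary path and kubeconfig taken from the log):

	// pods_phase.go: hypothetical sketch printing each kube-system pod and its phase.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.2/kubectl",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"get", "pods", "-n", "kube-system",
			"-o", `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}`)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}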
	I0921 22:21:49.170236  276511 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:21:49.172624  276511 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:21:49.172663  276511 node_conditions.go:123] node cpu capacity is 8
	I0921 22:21:49.172674  276511 node_conditions.go:105] duration metric: took 2.43038ms to run NodePressure ...
	I0921 22:21:49.172699  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:49.303995  276511 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:21:49.307574  276511 kubeadm.go:778] kubelet initialised
	I0921 22:21:49.307598  276511 kubeadm.go:779] duration metric: took 3.577635ms waiting for restarted kubelet to initialise ...
	I0921 22:21:49.307604  276511 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:21:49.312287  276511 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	I0921 22:21:51.318183  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:50.476802  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:52.977118  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:53.818525  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:56.318234  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:55.477148  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:57.473940  265259 pod_ready.go:81] duration metric: took 4m0.002309063s waiting for pod "coredns-565d847f94-qn9gp" in "kube-system" namespace to be "Ready" ...
	E0921 22:21:57.473968  265259 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-qn9gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:21:57.473989  265259 pod_ready.go:38] duration metric: took 4m0.007491689s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:21:57.474010  265259 kubeadm.go:631] restartCluster took 4m12.311396089s
	W0921 22:21:57.474123  265259 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:21:57.474151  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:22:00.342329  265259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.868152928s)
	I0921 22:22:00.342387  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:22:00.351706  265259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:22:00.358843  265259 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:22:00.358897  265259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:22:00.365576  265259 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
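The config check above decides between reusing and rebuilding the cluster: after `kubeadm reset` none of the four kubeadm kubeconfigs exist, so stale-config cleanup is skipped and a fresh `kubeadm init` follows. A small hedged sketch of that existence check (file list copied from the log; not minikube's actual implementation):

	// conf_check.go: hypothetical sketch of probing for kubeadm's kubeconfig files.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, c := range confs {
			if _, err := os.Stat(c); err != nil {
				fmt.Printf("missing %s: skipping stale config cleanup, fresh init needed\n", c)
				return
			}
		}
		fmt.Println("all kubeconfigs present: stale config cleanup would run")
	}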
	I0921 22:22:00.365616  265259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:22:00.405287  265259 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:22:00.405348  265259 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:22:00.433369  265259 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:22:00.433451  265259 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:22:00.433486  265259 kubeadm.go:317] OS: Linux
	I0921 22:22:00.433611  265259 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:22:00.433682  265259 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:22:00.433726  265259 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:22:00.433768  265259 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:22:00.433805  265259 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:22:00.433852  265259 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:22:00.433893  265259 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:22:00.434000  265259 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:22:00.434102  265259 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:22:00.502463  265259 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:22:00.502591  265259 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:22:00.502721  265259 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:22:00.621941  265259 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:22:00.626833  265259 out.go:204]   - Generating certificates and keys ...
	I0921 22:22:00.626978  265259 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:22:00.627053  265259 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:22:00.627158  265259 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:22:00.627246  265259 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:22:00.627351  265259 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:22:00.627410  265259 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:22:00.627483  265259 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:22:00.627551  265259 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:22:00.627613  265259 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:22:00.627685  265259 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:22:00.627760  265259 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:22:00.627816  265259 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:22:00.721598  265259 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:22:00.898538  265259 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:22:00.999773  265259 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:22:01.056843  265259 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:22:01.068556  265259 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:22:01.069535  265259 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:22:01.069603  265259 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:22:01.152435  265259 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:21:58.818354  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:01.317924  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:01.154531  265259 out.go:204]   - Booting up control plane ...
	I0921 22:22:01.154652  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:22:01.154956  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:22:01.156705  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:22:01.157879  265259 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:22:01.159675  265259 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:22:03.318485  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:05.318662  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:07.161939  265259 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002148 seconds
	I0921 22:22:07.162112  265259 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:22:07.170819  265259 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:22:07.689049  265259 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:22:07.689253  265259 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220921220439-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:22:08.196658  265259 kubeadm.go:317] [bootstrap-token] Using token: 6acdlb.hwh133k5t8mfdxv9
	I0921 22:22:08.198133  265259 out.go:204]   - Configuring RBAC rules ...
	I0921 22:22:08.198241  265259 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:22:08.202013  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:22:08.206522  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:22:08.208686  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:22:08.210651  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:22:08.212506  265259 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:22:08.219466  265259 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:22:08.394068  265259 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:22:08.606483  265259 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:22:08.607838  265259 kubeadm.go:317] 
	I0921 22:22:08.607927  265259 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:22:08.607941  265259 kubeadm.go:317] 
	I0921 22:22:08.608028  265259 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:22:08.608042  265259 kubeadm.go:317] 
	I0921 22:22:08.608070  265259 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:22:08.608136  265259 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:22:08.608199  265259 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:22:08.608205  265259 kubeadm.go:317] 
	I0921 22:22:08.608270  265259 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:22:08.608279  265259 kubeadm.go:317] 
	I0921 22:22:08.608333  265259 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:22:08.608341  265259 kubeadm.go:317] 
	I0921 22:22:08.608405  265259 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:22:08.608491  265259 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:22:08.608575  265259 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:22:08.608582  265259 kubeadm.go:317] 
	I0921 22:22:08.608682  265259 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:22:08.608771  265259 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:22:08.608778  265259 kubeadm.go:317] 
	I0921 22:22:08.608870  265259 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 6acdlb.hwh133k5t8mfdxv9 \
	I0921 22:22:08.608983  265259 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:22:08.609009  265259 kubeadm.go:317] 	--control-plane 
	I0921 22:22:08.609016  265259 kubeadm.go:317] 
	I0921 22:22:08.609128  265259 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:22:08.609134  265259 kubeadm.go:317] 
	I0921 22:22:08.609197  265259 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 6acdlb.hwh133k5t8mfdxv9 \
	I0921 22:22:08.609284  265259 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:22:08.611756  265259 kubeadm.go:317] W0921 22:22:00.400408    3296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:22:08.612043  265259 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:22:08.612188  265259 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:22:08.612219  265259 cni.go:95] Creating CNI manager for ""
	I0921 22:22:08.612229  265259 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:22:08.614511  265259 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:22:07.818323  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:10.317822  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:08.615918  265259 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:22:08.676246  265259 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:22:08.676274  265259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:22:08.693653  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:22:09.436658  265259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:22:09.436794  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:09.436795  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=embed-certs-20220921220439-10174 minikube.k8s.io/updated_at=2022_09_21T22_22_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:09.443334  265259 ops.go:34] apiserver oom_adj: -16
	I0921 22:22:09.528958  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:10.110988  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:10.611127  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:11.110374  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:11.610630  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:12.110987  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:12.611437  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:12.318364  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:14.318738  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:13.110661  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:13.610755  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:14.111256  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:14.610806  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:15.110885  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:15.610612  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:16.111276  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:16.611348  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:17.110520  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:17.610377  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:16.817958  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:18.818500  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:21.317777  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:18.110696  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:18.610392  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:19.110996  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:19.611138  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:20.111351  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:20.610608  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:21.111055  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:21.611123  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:21.738997  265259 kubeadm.go:1067] duration metric: took 12.302271715s to wait for elevateKubeSystemPrivileges.
	I0921 22:22:21.739026  265259 kubeadm.go:398] StartCluster complete in 4m36.622037809s
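The burst of `kubectl get sa default` calls above is a readiness gate: the command fails until the controller-manager creates the default ServiceAccount, at which point elevateKubeSystemPrivileges can proceed. A hedged sketch of that retry loop (interval and paths mirror the log; the real code handles errors and timeouts more carefully):

	// sa_wait.go: hypothetical sketch polling for the default ServiceAccount.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.25.2/kubectl"
		for i := 0; i < 120; i++ { // ~60s at 500ms per attempt
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig")
			if cmd.Run() == nil {
				fmt.Println("default ServiceAccount exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}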
	I0921 22:22:21.739041  265259 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:22:21.739131  265259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:22:21.740483  265259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:22:22.256205  265259 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220921220439-10174" rescaled to 1
	I0921 22:22:22.256273  265259 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:22:22.260022  265259 out.go:177] * Verifying Kubernetes components...
	I0921 22:22:22.256319  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:22:22.256360  265259 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:22:22.256527  265259 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:22:22.261883  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:22:22.261936  265259 addons.go:65] Setting dashboard=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261954  265259 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261957  265259 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261939  265259 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261968  265259 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220921220439-10174"
	W0921 22:22:22.261978  265259 addons.go:162] addon metrics-server should already be in state true
	I0921 22:22:22.261978  265259 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220921220439-10174"
	I0921 22:22:22.261965  265259 addons.go:153] Setting addon dashboard=true in "embed-certs-20220921220439-10174"
	W0921 22:22:22.261991  265259 addons.go:162] addon dashboard should already be in state true
	I0921 22:22:22.261979  265259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220921220439-10174"
	I0921 22:22:22.262027  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.262047  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	W0921 22:22:22.261992  265259 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:22:22.262152  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.262334  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262530  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262581  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262607  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.275321  265259 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:22:22.305122  265259 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:22:22.302980  265259 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220921220439-10174"
	I0921 22:22:22.308412  265259 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:22:22.306819  265259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0921 22:22:22.306823  265259 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:22:22.310056  265259 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:22:22.310063  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:22:22.311661  265259 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:22:22.311677  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:22:22.310084  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:22:22.313452  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:22:22.313470  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:22:22.313520  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.310090  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.311776  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.311792  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.314077  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.348685  265259 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:22:22.348718  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:22:22.348780  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.350631  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.350663  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.355060  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.379378  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:22:22.385678  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.494559  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:22:22.494597  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:22:22.494751  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:22:22.494781  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:22:22.500109  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:22:22.589944  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:22:22.589985  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:22:22.592642  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:22:22.592670  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:22:22.598738  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:22:22.686188  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:22:22.686217  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:22:22.692562  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:22:22.692589  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:22:22.777831  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:22:22.795247  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:22:22.795282  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:22:22.886056  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:22:22.886085  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:22:22.987013  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:22:22.987091  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:22:23.081859  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:22:23.081893  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:22:23.177114  265259 start.go:810] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
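The host record injection above is a textual edit of the coredns ConfigMap: the sed pipeline inserts a hosts{} block just before the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the host gateway (192.168.67.1 here), then replaces the ConfigMap. A pure-string sketch of the same edit follows; the sample Corefile is illustrative, not the exact one from this cluster.

	// corefile_inject.go: hypothetical sketch of inserting a hosts{} block into a Corefile.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				b.WriteString(hostsBlock) // insert just before the forward line, as sed does
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.67.1"))
	}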
	I0921 22:22:23.181203  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:22:23.181235  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:22:23.203322  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:22:23.203354  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:22:23.284061  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:22:23.584574  265259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084417298s)
	I0921 22:22:23.777502  265259 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220921220439-10174"
	I0921 22:22:24.109571  265259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:22:23.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:26.317609  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:24.111462  265259 addons.go:414] enableAddons completed in 1.8551066s
	I0921 22:22:24.289051  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:26.789101  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:28.317733  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:30.318197  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:29.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:31.289419  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:32.318313  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:34.318420  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:33.789632  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:36.289225  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:36.818347  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:39.317465  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:38.289442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:40.789366  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:41.817507  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:43.817568  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:45.818320  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:43.289515  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:45.789266  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:47.789660  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:48.318077  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:50.318393  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:50.288988  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:52.289299  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:52.817366  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:54.818323  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:54.789136  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:56.789849  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:57.318147  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:59.818131  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:59.289439  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:01.789349  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:01.818325  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:04.318178  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:03.789567  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:06.289658  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:06.817937  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:08.818155  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:10.818493  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:08.289818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:10.290273  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:12.789081  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:13.317568  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:15.318068  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:14.789291  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:16.789892  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:17.818331  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:20.318055  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:18.790028  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:21.289295  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:22.817832  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:24.818318  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:23.289408  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:25.789350  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:27.789995  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:27.317384  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:29.318499  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:30.288831  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:32.289573  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:31.818328  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:34.317921  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:36.318549  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:34.789272  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:36.789372  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:38.817288  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:40.818024  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:38.789452  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:41.288941  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:43.317555  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:45.817730  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:43.290284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:45.789282  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:47.789698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:48.318608  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:50.818073  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:50.289450  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:52.789754  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
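The two interleaved polling loops above report the same stalemate for minutes on end: process 276511 finds pod coredns-565d847f94-m8xgt stuck in Pending because the scheduler reports "0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }", while process 265259 finds node "embed-certs-20220921220439-10174" still not Ready. A minimal sketch for reproducing the same checks by hand; <profile> is a placeholder for the minikube profile under test, not a name taken from this log:

  kubectl --context <profile> -n kube-system get pod coredns-565d847f94-m8xgt -o wide
  kubectl --context <profile> -n kube-system describe pod coredns-565d847f94-m8xgt
  kubectl --context <profile> get nodes -o wide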
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1058c41aafbb8       d921cee849482       3 minutes ago       Exited              kindnet-cni               3                   08adfd3bc0694
	e1b3d54125fe2       1c7d8c51823b5       12 minutes ago      Running             kube-proxy                0                   c4181b02eb4c6
	2654f64b12dee       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   6d751c20960c7
	1bb50adc50b7e       97801f8394908       12 minutes ago      Running             kube-apiserver            0                   bb52b5028f7b8
	9e931b83ea689       dbfceb93c69b6       12 minutes ago      Running             kube-controller-manager   0                   e0dba18ec8f49
	8a3d458869b09       ca0ea1ee3cfd3       12 minutes ago      Running             kube-scheduler            0                   8550ad3b2adb2
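Note the asymmetry in the table above: the control-plane containers and kube-proxy have been Running for 12 minutes on attempt 0, while kindnet-cni sits at Exited on attempt 3, i.e. the CNI pod is crash-looping and nothing else is failing. A sketch for pulling the failed container's output from inside the node; <profile> is a hypothetical placeholder, and 1058c41aafbb8 is the truncated container ID shown above:

  minikube -p <profile> ssh -- sudo crictl ps -a --name kindnet
  minikube -p <profile> ssh -- sudo crictl logs 1058c41aafbb8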
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:11:26 UTC, end at Wed 2022-09-21 22:23:56 UTC. --
	Sep 21 22:17:14 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:14.734694599Z" level=warning msg="cleaning up after shim disconnected" id=0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d namespace=k8s.io
	Sep 21 22:17:14 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:14.734716304Z" level=info msg="cleaning up dead shim"
	Sep 21 22:17:14 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:14.745039485Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:17:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2477 runtime=io.containerd.runc.v2\n"
	Sep 21 22:17:15 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:15.364213029Z" level=info msg="RemoveContainer for \"2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a\""
	Sep 21 22:17:15 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:15.371604573Z" level=info msg="RemoveContainer for \"2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a\" returns successfully"
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.713179853Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.731906456Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\""
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.732700732Z" level=info msg="StartContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\""
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.980582670Z" level=info msg="StartContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\" returns successfully"
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.502506566Z" level=info msg="shim disconnected" id=d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.502581489Z" level=warning msg="cleaning up after shim disconnected" id=d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7 namespace=k8s.io
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.502605699Z" level=info msg="cleaning up dead shim"
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.514026043Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:20:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2589 runtime=io.containerd.runc.v2\n"
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.684050023Z" level=info msg="RemoveContainer for \"0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d\""
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.691382521Z" level=info msg="RemoveContainer for \"0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d\" returns successfully"
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.709696848Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.722517453Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5\""
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.723028410Z" level=info msg="StartContainer for \"1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5\""
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.803679482Z" level=info msg="StartContainer for \"1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5\" returns successfully"
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.322084474Z" level=info msg="shim disconnected" id=1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.322150994Z" level=warning msg="cleaning up after shim disconnected" id=1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 namespace=k8s.io
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.322167635Z" level=info msg="cleaning up dead shim"
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.332704162Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:23:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2706 runtime=io.containerd.runc.v2\n"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:19.036383091Z" level=info msg="RemoveContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\""
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:19.042716153Z" level=info msg="RemoveContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\" returns successfully"
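The containerd log is the runtime's view of the same crash loop: each kindnet attempt starts cleanly ("StartContainer ... returns successfully"), runs for roughly three minutes, then the shim disconnects and kubelet removes the previous attempt before creating the next one (Attempt:2 at 22:17, Attempt:3 at 22:20, exit at 22:23). containerd itself logs no errors, which points at the kindnet process rather than the runtime. A sketch for reviewing this window of the unit log on the node, using the journald timestamps above; <profile> is a placeholder:

  minikube -p <profile> ssh -- sudo journalctl -u containerd --since "2022-09-21 22:17:00" --until "2022-09-21 22:24:00"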
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220921221118-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220921221118-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_11_39_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:11:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220921221118-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-different-port-20220921221118-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                15db467d-fd65-4163-8719-8617da0ee9c6
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220921221118-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-7wbpp                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220921221118-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220921221118-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lzphc                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220921221118-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node default-k8s-different-port-20220921221118-10174 event: Registered Node default-k8s-different-port-20220921221118-10174 in Controller
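This describe output closes the causal chain: the node's Ready condition is False with reason KubeletNotReady ("cni plugin not initialized"), so the node keeps its node.kubernetes.io/not-ready:NoSchedule taint, and that taint is exactly what the scheduler cites when it leaves coredns Pending. A short sketch to confirm both halves of the chain, using the node name shown above:

  kubectl get node default-k8s-different-port-20220921221118-10174 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
  kubectl get node default-k8s-different-port-20220921221118-10174 \
    -o jsonpath='{.spec.taints[*].key}'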
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
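The repeating "martian source" lines are the kernel rejecting packets sourced from the pod range (10.244.0.40 via 10.244.0.4) that arrive on eth0 with no valid return route, which is consistent with the CNI that would normally install those routes being down; with the docker driver every minikube node shares the host kernel, so this dmesg cannot be pinned to a single cluster. The lines appear only because martian logging is enabled. A sketch to check the relevant sysctls on a node, with <profile> as a placeholder:

  minikube -p <profile> ssh -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter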
	
	* 
	* ==> etcd [2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01] <==
	* {"level":"info","ts":"2022-09-21T22:11:32.803Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-different-port-20220921221118-10174 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:11:32.994Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:11:32.994Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2022-09-21T22:17:27.148Z","caller":"traceutil/trace.go:171","msg":"trace[81568497] linearizableReadLoop","detail":"{readStateIndex:557; appliedIndex:557; }","duration":"144.279554ms","start":"2022-09-21T22:17:27.003Z","end":"2022-09-21T22:17:27.148Z","steps":["trace[81568497] 'read index received'  (duration: 144.269717ms)","trace[81568497] 'applied index is now lower than readState.Index'  (duration: 8.442µs)"],"step_count":2}
	{"level":"warn","ts":"2022-09-21T22:17:27.186Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"182.102976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2022-09-21T22:17:27.186Z","caller":"traceutil/trace.go:171","msg":"trace[4217368] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:473; }","duration":"182.219248ms","start":"2022-09-21T22:17:27.003Z","end":"2022-09-21T22:17:27.186Z","steps":["trace[4217368] 'agreement among raft nodes before linearized reading'  (duration: 144.455002ms)","trace[4217368] 'range keys from in-memory index tree'  (duration: 37.603398ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:17:57.177Z","caller":"traceutil/trace.go:171","msg":"trace[1122206554] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"166.694194ms","start":"2022-09-21T22:17:57.010Z","end":"2022-09-21T22:17:57.177Z","steps":["trace[1122206554] 'process raft request'  (duration: 75.971431ms)","trace[1122206554] 'compare'  (duration: 90.577282ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:21:33.333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":456}
	{"level":"info","ts":"2022-09-21T22:21:33.334Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":456,"took":"398.898µs"}
	
	* 
	* ==> kernel <==
	*  22:23:56 up  1:06,  0 users,  load average: 1.07, 1.32, 1.77
	Linux default-k8s-different-port-20220921221118-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2] <==
	* I0921 22:11:35.575804       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:11:35.575972       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:11:35.576041       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:11:35.576424       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:11:35.576630       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:11:35.576649       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:11:35.592140       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:11:35.599285       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:11:36.242080       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:11:36.463042       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:11:36.466360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:11:36.466387       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:11:36.848899       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:11:36.897461       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:11:36.989327       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:11:36.994282       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0921 22:11:36.995247       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:11:36.999018       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:11:37.513301       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:11:38.547983       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:11:38.554911       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:11:38.562442       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:11:38.625519       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:11:51.719506       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0921 22:11:51.768557       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7] <==
	* I0921 22:11:50.915896       1 shared_informer.go:262] Caches are synced for ephemeral
	I0921 22:11:50.915920       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0921 22:11:50.915920       1 shared_informer.go:262] Caches are synced for HPA
	I0921 22:11:50.915965       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0921 22:11:50.915969       1 shared_informer.go:262] Caches are synced for daemon sets
	I0921 22:11:50.916011       1 shared_informer.go:262] Caches are synced for deployment
	I0921 22:11:50.916180       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:11:50.916336       1 shared_informer.go:262] Caches are synced for job
	I0921 22:11:50.916393       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:11:50.916669       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:11:50.919495       1 shared_informer.go:262] Caches are synced for cronjob
	I0921 22:11:50.920490       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:11:51.021946       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:11:51.044665       1 shared_informer.go:262] Caches are synced for attach detach
	I0921 22:11:51.072661       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:11:51.479876       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:11:51.515801       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:11:51.515827       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:11:51.721293       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:11:51.774170       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lzphc"
	I0921 22:11:51.776223       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7wbpp"
	I0921 22:11:51.913709       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:11:51.921133       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-hhmh6"
	I0921 22:11:51.926325       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-mrkjn"
	I0921 22:11:51.984101       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-hhmh6"
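The last five events record minikube's usual single-node CoreDNS adjustment: the Deployment is scaled to 2, both replicas are created (hhmh6 and mrkjn), then it is scaled back down to 1 from 2 and hhmh6 is deleted, leaving one coredns pod for this cluster. A one-line sketch to confirm the final replica count:

  kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'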
	
	* 
	* ==> kube-proxy [e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608] <==
	* I0921 22:11:52.320700       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0921 22:11:52.320772       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0921 22:11:52.320813       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:11:52.340612       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:11:52.340647       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:11:52.340656       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:11:52.340676       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:11:52.340703       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:11:52.340862       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:11:52.341069       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:11:52.341099       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:11:52.341713       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:11:52.341738       1 config.go:317] "Starting service config controller"
	I0921 22:11:52.341749       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:11:52.341752       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:11:52.341786       1 config.go:444] "Starting node config controller"
	I0921 22:11:52.341804       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:11:52.442259       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0921 22:11:52.442300       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:11:52.442317       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767] <==
	* E0921 22:11:35.585266       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:11:35.585270       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:11:35.585278       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:11:35.585375       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:11:35.585400       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:11:35.585404       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:35.585402       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:11:35.585415       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:11:35.585422       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:35.585423       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:11:35.585324       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:11:35.585438       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:11:36.465124       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.465238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.477413       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:11:36.477473       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0921 22:11:36.492805       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.492841       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.501968       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.502004       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.596856       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:11:36.596889       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:11:36.676392       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:11:36.676437       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0921 22:11:38.681580       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:11:26 UTC, end at Wed 2022-09-21 22:23:56 UTC. --
	Sep 21 22:22:39 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:39.073373    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:44 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:44.074085    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:49 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:49.075082    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:54 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:54.076474    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:59 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:59.077808    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:04 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:04.079377    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:09 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:09.080738    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:14 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:14.082350    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:19.035161    1301 scope.go:115] "RemoveContainer" containerID="d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:19.035501    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:19.035946    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:19.084090    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:24 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:24.085039    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:29 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:29.085747    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:29 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:29.706721    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:29 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:29.707061    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:34 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:34.086774    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:39 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:39.088167    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:41 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:41.706688    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:41 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:41.706965    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:44 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:44.089557    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:49 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:49.090437    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:53 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:53.706819    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:53 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:53.707078    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:54 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:54.092015    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
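The kubelet log above repeats "cni plugin not initialized" every few seconds while the kindnet-cni container in pod kindnet-7wbpp sits in CrashLoopBackOff, which is why the node never reports Ready. A minimal Go sketch, in the spirit of the (dbg) Run helpers in this report, that pulls the crashed container's previous log for diagnosis; the context, pod, and container names are copied from the log above, and it assumes that pod still exists when this runs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// All names below come from the kubelet log in this report.
	ctx := "default-k8s-different-port-20220921221118-10174"
	out, err := exec.Command("kubectl", "--context", ctx, "-n", "kube-system",
		"logs", "kindnet-7wbpp", "-c", "kindnet-cni", "--previous").CombinedOutput()
	fmt.Printf("exit: %v\n%s", err, out)
}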
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-mrkjn storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe pod busybox coredns-565d847f94-mrkjn storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod busybox coredns-565d847f94-mrkjn storage-provisioner: exit status 1 (65.14947ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wppsb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wppsb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m48s (x2 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-mrkjn" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
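The FailedScheduling event above is the downstream symptom of the same CNI failure: the node stays NotReady, the node.kubernetes.io/not-ready taint is never removed, and busybox carries no toleration for it. A minimal Go sketch that lists each node with its taints to confirm that chain; the jsonpath expression is illustrative, not taken from the report:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "default-k8s-different-port-20220921221118-10174" // profile name from the report
	// Print "<node>\t<taints>" per node; expect node.kubernetes.io/not-ready here.
	jp := `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}`
	out, err := exec.Command("kubectl", "--context", ctx, "get", "nodes", "-o", jp).CombinedOutput()
	fmt.Printf("exit: %v\n%s", err, out)
}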
helpers_test.go:277: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod busybox coredns-565d847f94-mrkjn storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221118-10174
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220921221118-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112",
	        "Created": "2022-09-21T22:11:25.759772693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251802,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:11:26.140466749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hostname",
	        "HostsPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hosts",
	        "LogPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112-json.log",
	        "Name": "/default-k8s-different-port-20220921221118-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220921221118-10174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220921221118-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220921221118-10174",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220921221118-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220921221118-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2c16cce9402b8d39506117583a7fad80a94710d15dab294e1374d69074b6b894",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49417"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49414"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49416"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49415"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2c16cce9402b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220921221118-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "37728b19138a",
	                        "default-k8s-different-port-20220921221118-10174"
	                    ],
	                    "NetworkID": "e093ea2ee154cf6d0e5d3b4a191700b36287f8ecd49e1b54f684a8f299ea6b79",
	                    "EndpointID": "adb7408d4c9675e8a8c7221c5c44296bade020a1fef2417db2c78e1b8536881c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
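The inspect dump shows every container port published only on 127.0.0.1 with an ephemeral host port; 8444/tcp, the --apiserver-port chosen for this profile (see the Audit table below), lands on 127.0.0.1:49415. A minimal Go sketch that decodes just that slice of the docker inspect JSON; the struct mirrors only the fields visible in the dump above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspect models only the NetworkSettings.Ports fields shown in the dump above.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	name := "default-k8s-different-port-20220921221118-10174" // container name from the dump
	raw, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // docker inspect returns a JSON array
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	for port, binds := range containers[0].NetworkSettings.Ports {
		for _, b := range binds {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}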
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220921221118-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | enable-default-cni-20220921215523-10174         | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC | 21 Sep 22 22:11 UTC |
	|         | enable-default-cni-20220921215523-10174                    |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:11 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:21:21
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:21:21.729027  276511 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:21:21.729174  276511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:21:21.729189  276511 out.go:309] Setting ErrFile to fd 2...
	I0921 22:21:21.729194  276511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:21:21.729308  276511 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:21:21.729870  276511 out.go:303] Setting JSON to false
	I0921 22:21:21.731566  276511 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3833,"bootTime":1663795049,"procs":716,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:21:21.731629  276511 start.go:125] virtualization: kvm guest
	I0921 22:21:21.734495  276511 out.go:177] * [no-preload-20220921220832-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:21:21.736412  276511 notify.go:214] Checking for updates...
	I0921 22:21:21.737826  276511 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:21:21.739371  276511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:21:21.740848  276511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:21:21.742164  276511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:21:21.743463  276511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:21:21.745159  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:21:21.745572  276511 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:21:21.776785  276511 docker.go:137] docker version: linux-20.10.18
	I0921 22:21:21.776874  276511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:21:21.873005  276511 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:21:21.797949632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:21:21.873105  276511 docker.go:254] overlay module found
	I0921 22:21:21.875489  276511 out.go:177] * Using the docker driver based on existing profile
	I0921 22:21:21.876982  276511 start.go:284] selected driver: docker
	I0921 22:21:21.877000  276511 start.go:808] validating driver "docker" against &{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:21.877104  276511 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:21:21.877949  276511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:21:21.972195  276511 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:21:21.898685177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:21:21.972596  276511 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:21:21.972625  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:21.972634  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:21.972657  276511 start_flags.go:316] config:
	{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:21.975206  276511 out.go:177] * Starting control plane node no-preload-20220921220832-10174 in cluster no-preload-20220921220832-10174
	I0921 22:21:21.976541  276511 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:21:21.978261  276511 out.go:177] * Pulling base image ...
	I0921 22:21:21.979898  276511 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:21:21.980011  276511 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:21:21.980055  276511 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:21:21.980230  276511 cache.go:107] acquiring lock: {Name:mk964a2e66a5444defeab854e6434a6f27bdb527 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980240  276511 cache.go:107] acquiring lock: {Name:mka10a341c76ae214d12cf65b1bbb970ff641c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980291  276511 cache.go:107] acquiring lock: {Name:mkb5c943b9da9e6c7ecc443b377ab990272f1b2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980336  276511 cache.go:107] acquiring lock: {Name:mk944562b9b2415f3d8e7ad36b373f92205bdb5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980366  276511 cache.go:107] acquiring lock: {Name:mk6ae321142fb89935897137e30217f9ae2499ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980402  276511 cache.go:107] acquiring lock: {Name:mk0eb3fbf1ee9e76ad78bfdee22277edae17ed2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980366  276511 cache.go:107] acquiring lock: {Name:mk4fab6516978f221b8246a61f380f8ab97f066c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980335  276511 cache.go:107] acquiring lock: {Name:mkee4799116b59e3f65d0127cdad0c25a01a05e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980556  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 exists
	I0921 22:21:21.980581  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0921 22:21:21.980559  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 exists
	I0921 22:21:21.980583  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0921 22:21:21.980592  276511 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2" took 362.285µs
	I0921 22:21:21.980608  276511 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 succeeded
	I0921 22:21:21.980603  276511 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 272.508µs
	I0921 22:21:21.980617  276511 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0921 22:21:21.980614  276511 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 397.033µs
	I0921 22:21:21.980610  276511 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2" took 300.17µs
	I0921 22:21:21.980625  276511 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0921 22:21:21.980629  276511 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 succeeded
	I0921 22:21:21.980647  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 exists
	I0921 22:21:21.980673  276511 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2" took 420.957µs
	I0921 22:21:21.980689  276511 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 succeeded
	I0921 22:21:21.980713  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0921 22:21:21.980730  276511 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 401.678µs
	I0921 22:21:21.980744  276511 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0921 22:21:21.980757  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0921 22:21:21.980790  276511 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 470.77µs
	I0921 22:21:21.980807  276511 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0921 22:21:21.980833  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 exists
	I0921 22:21:21.980848  276511 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2" took 492.866µs
	I0921 22:21:21.980861  276511 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 succeeded
	I0921 22:21:21.980876  276511 cache.go:87] Successfully saved all images to host disk.
	I0921 22:21:22.004613  276511 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:21:22.004656  276511 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:21:22.004676  276511 cache.go:208] Successfully downloaded all kic artifacts
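For context, the "exists in daemon, skipping load" decision above amounts to a daemon-side image lookup; a hypothetical reproduction from the host (tag-only reference used for brevity, and not part of this job's output) would be:
	docker image inspect gcr.io/k8s-minikube/kicbase:v0.0.34 --format '{{.Id}}'
	# a zero exit status with an image ID printed means the base image is already local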
	I0921 22:21:22.004708  276511 start.go:364] acquiring machines lock for no-preload-20220921220832-10174: {Name:mk189db360f5ac486cb35206c34214af6d1c65b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:22.004793  276511 start.go:368] acquired machines lock for "no-preload-20220921220832-10174" in 64.56µs
	I0921 22:21:22.004813  276511 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:21:22.004818  276511 fix.go:55] fixHost starting: 
	I0921 22:21:22.005039  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:21:22.028746  276511 fix.go:103] recreateIfNeeded on no-preload-20220921220832-10174: state=Stopped err=<nil>
	W0921 22:21:22.028785  276511 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:21:22.031134  276511 out.go:177] * Restarting existing docker container for "no-preload-20220921220832-10174" ...
	I0921 22:21:19.977941  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:22.477413  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:22.032731  276511 cli_runner.go:164] Run: docker start no-preload-20220921220832-10174
	I0921 22:21:22.397294  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:21:22.425241  276511 kic.go:415] container "no-preload-20220921220832-10174" state is running.
	I0921 22:21:22.425628  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:22.452469  276511 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:21:22.452688  276511 machine.go:88] provisioning docker machine ...
	I0921 22:21:22.452713  276511 ubuntu.go:169] provisioning hostname "no-preload-20220921220832-10174"
	I0921 22:21:22.452750  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:22.481744  276511 main.go:134] libmachine: Using SSH client type: native
	I0921 22:21:22.481925  276511 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49438 <nil> <nil>}
	I0921 22:21:22.481949  276511 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220921220832-10174 && echo "no-preload-20220921220832-10174" | sudo tee /etc/hostname
	I0921 22:21:22.482598  276511 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35926->127.0.0.1:49438: read: connection reset by peer
	I0921 22:21:25.619844  276511 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220921220832-10174
	
	I0921 22:21:25.619917  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:25.644377  276511 main.go:134] libmachine: Using SSH client type: native
	I0921 22:21:25.644520  276511 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49438 <nil> <nil>}
	I0921 22:21:25.644541  276511 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220921220832-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220921220832-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220921220832-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:21:25.771438  276511 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:21:25.771470  276511 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:21:25.771545  276511 ubuntu.go:177] setting up certificates
	I0921 22:21:25.771554  276511 provision.go:83] configureAuth start
	I0921 22:21:25.771606  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:25.795693  276511 provision.go:138] copyHostCerts
	I0921 22:21:25.795778  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:21:25.795798  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:21:25.795864  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:21:25.795944  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:21:25.795955  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:21:25.795981  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:21:25.796035  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:21:25.796044  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:21:25.796066  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:21:25.796151  276511 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220921220832-10174 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220921220832-10174]
	I0921 22:21:25.980041  276511 provision.go:172] copyRemoteCerts
	I0921 22:21:25.980129  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:21:25.980174  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.005654  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.099196  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:21:26.116665  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0921 22:21:26.133700  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:21:26.150095  276511 provision.go:86] duration metric: configureAuth took 378.527139ms
	I0921 22:21:26.150126  276511 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:21:26.150282  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:21:26.150293  276511 machine.go:91] provisioned docker machine in 3.697591605s
	I0921 22:21:26.150301  276511 start.go:300] post-start starting for "no-preload-20220921220832-10174" (driver="docker")
	I0921 22:21:26.150307  276511 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:21:26.150350  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:21:26.150391  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.177098  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.266994  276511 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:21:26.269733  276511 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:21:26.269758  276511 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:21:26.269766  276511 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:21:26.269773  276511 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:21:26.269784  276511 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:21:26.269843  276511 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:21:26.269931  276511 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:21:26.270038  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:21:26.276595  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:21:26.293384  276511 start.go:303] post-start completed in 143.069577ms
	I0921 22:21:26.293459  276511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:21:26.293509  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.319279  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.412318  276511 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:21:26.416228  276511 fix.go:57] fixHost completed within 4.411406055s
	I0921 22:21:26.416252  276511 start.go:83] releasing machines lock for "no-preload-20220921220832-10174", held for 4.411447835s
	I0921 22:21:26.416336  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:26.439824  276511 ssh_runner.go:195] Run: systemctl --version
	I0921 22:21:26.439875  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.439894  276511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:21:26.439973  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.463981  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.464292  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.585502  276511 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:21:26.597003  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:21:26.606196  276511 docker.go:188] disabling docker service ...
	I0921 22:21:26.606244  276511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:21:26.615407  276511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:21:26.623690  276511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:21:26.699874  276511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:21:24.477809  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:26.976994  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:26.778612  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:21:26.787337  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:21:26.799540  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:21:26.807935  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:21:26.815661  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:21:26.823769  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
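A hypothetical spot-check of the four values the sed edits above set; the exact section paths inside config.toml follow containerd 1.6's default CRI plugin layout, which is an assumption here:
	grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected after the edits:
	#   sandbox_image = "registry.k8s.io/pause:3.8"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.mk"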
	I0921 22:21:26.831216  276511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:21:26.837204  276511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:21:26.843235  276511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:21:26.913162  276511 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:21:26.985402  276511 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:21:26.985482  276511 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:21:26.989229  276511 start.go:471] Will wait 60s for crictl version
	I0921 22:21:26.989292  276511 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:21:27.015951  276511 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:21:27Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:21:28.977565  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:31.477682  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:33.976943  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:36.476620  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
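The interleaved 265259 lines above come from a second minikube process running in parallel, whose coredns pod stays Pending behind the node's not-ready taint. A hypothetical way to confirm that taint against that cluster's kubeconfig:
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
	# a not-yet-ready node typically carries the NoSchedule taint
	# node.kubernetes.io/not-ready named in the scheduler message above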
	I0921 22:21:38.063256  276511 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:21:38.087330  276511 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:21:38.087394  276511 ssh_runner.go:195] Run: containerd --version
	I0921 22:21:38.117027  276511 ssh_runner.go:195] Run: containerd --version
	I0921 22:21:38.148570  276511 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:21:38.150093  276511 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:21:38.172557  276511 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0921 22:21:38.175833  276511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:21:38.185102  276511 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:21:38.185143  276511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:21:38.207088  276511 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:21:38.207109  276511 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:21:38.207180  276511 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:21:38.230239  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:38.230269  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:38.230283  276511 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:21:38.230305  276511 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220921220832-10174 NodeName:no-preload-20220921220832-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:21:38.230491  276511 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220921220832-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:21:38.230603  276511 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220921220832-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0921 22:21:38.230653  276511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:21:38.237825  276511 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:21:38.237881  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:21:38.244824  276511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0921 22:21:38.257993  276511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:21:38.270025  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
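A hypothetical way to sanity-check the generated config after it lands on the node as kubeadm.yaml.new (see the scp line above); kubeadm init supports a dry-run mode, though no such check was run in this job:
	sudo /var/lib/minikube/binaries/v1.25.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run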
	I0921 22:21:38.282061  276511 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:21:38.285065  276511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:21:38.294394  276511 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174 for IP: 192.168.94.2
	I0921 22:21:38.294515  276511 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:21:38.294555  276511 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:21:38.294619  276511 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key
	I0921 22:21:38.294690  276511 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a
	I0921 22:21:38.294731  276511 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key
	I0921 22:21:38.294821  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:21:38.294848  276511 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:21:38.294860  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:21:38.294885  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:21:38.294912  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:21:38.294934  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:21:38.294971  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:21:38.295476  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:21:38.312346  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:21:38.328491  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:21:38.344965  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:21:38.361363  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:21:38.378193  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:21:38.394663  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:21:38.411219  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:21:38.427455  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:21:38.443759  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:21:38.459952  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:21:38.477220  276511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:21:38.490029  276511 ssh_runner.go:195] Run: openssl version
	I0921 22:21:38.494865  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:21:38.502105  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.505092  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.505143  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.510082  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:21:38.516779  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:21:38.524387  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.527407  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.527449  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.532184  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:21:38.538593  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:21:38.545959  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.548914  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.548957  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.553573  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
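The .0 symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values, which is how each openssl x509 -hash call pairs with the ln -fs that follows it. A hypothetical rerun of the same scheme for the minikube CA:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"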
	I0921 22:21:38.560211  276511 kubeadm.go:396] StartCluster: {Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
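Note: for readability, here is a trimmed Go sketch of the cluster-config shape implied by the StartCluster dump above. Only a handful of the logged fields are shown, and the type definitions are assumptions for illustration, not minikube's actual ones:

    // Hypothetical, trimmed sketch of the config printed by StartCluster.
    package main

    import "fmt"

    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
        NetworkPlugin     string
        ServiceCIDR       string
        NodePort          int
    }

    type ClusterConfig struct {
        Name             string
        Memory           int // MB
        CPUs             int
        Driver           string
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cc := ClusterConfig{
            Name: "no-preload-20220921220832-10174",
            Memory: 2200, CPUs: 2, Driver: "docker",
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.25.2",
                ContainerRuntime:  "containerd",
                NetworkPlugin:     "cni",
                ServiceCIDR:       "10.96.0.0/12",
                NodePort:          8443,
            },
        }
        fmt.Printf("%+v\n", cc)
    }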
	I0921 22:21:38.560292  276511 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:21:38.560329  276511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:21:38.584578  276511 cri.go:87] found id: "8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	I0921 22:21:38.584604  276511 cri.go:87] found id: "3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843"
	I0921 22:21:38.584611  276511 cri.go:87] found id: "6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646"
	I0921 22:21:38.584617  276511 cri.go:87] found id: "a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb"
	I0921 22:21:38.584622  276511 cri.go:87] found id: "b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0"
	I0921 22:21:38.584629  276511 cri.go:87] found id: "b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409"
	I0921 22:21:38.584635  276511 cri.go:87] found id: ""
	I0921 22:21:38.584680  276511 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:21:38.597489  276511 cri.go:114] JSON = null
	W0921 22:21:38.597556  276511 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
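Note: the warning above comes from a consistency check: crictl sees six kube-system containers, but `runc list` (used to find paused containers) returns the literal JSON null, so there is nothing to unpause and the step is skipped. A rough local sketch of that check, assuming crictl and runc are on PATH (minikube runs both over SSH):

    // Sketch of the paused-vs-listed container consistency check logged above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        ps, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(string(ps))

        list, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            fmt.Println("runc failed:", err)
            return
        }
        // An empty runtime root returns the literal JSON "null", matching the log.
        if strings.TrimSpace(string(list)) == "null" && len(ids) > 0 {
            fmt.Printf("list returned 0 containers, but ps returned %d\n", len(ids))
        }
    }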
	I0921 22:21:38.597640  276511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:21:38.604641  276511 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:21:38.604678  276511 kubeadm.go:627] restartCluster start
	I0921 22:21:38.604716  276511 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:21:38.611273  276511 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:38.611984  276511 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220921220832-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:21:38.612435  276511 kubeconfig.go:127] "no-preload-20220921220832-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:21:38.613052  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:21:38.614343  276511 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:21:38.620864  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:38.620917  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:38.628681  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:38.829072  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:38.829161  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:38.837312  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.029609  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.029716  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.038394  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.229726  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.229799  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.238375  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.429768  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.429867  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.438213  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.629500  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.629592  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.638208  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.829520  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.829665  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.838208  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.029479  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.029573  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.038635  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.228885  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.228956  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.237569  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.429785  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.429859  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.438642  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.628883  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.628958  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.637446  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.829709  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.829789  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.838273  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.029560  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.029638  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.038065  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.229380  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.229482  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.238040  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.429329  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.429408  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.437964  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.629268  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.629339  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.637793  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.637813  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.637849  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.645663  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.645692  276511 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
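Note: the repeated "Checking apiserver status" entries above are a ~200ms polling loop: pgrep exits non-zero while no kube-apiserver process matches, and once the deadline passes the restart path concludes that a reconfigure is needed. A condensed Go sketch, with a deliberately short deadline for illustration:

    // Sketch of the pgrep polling loop behind the "Checking apiserver status" lines.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(3 * time.Second) // the log uses a longer window
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil { // pgrep exits 1 when nothing matches, surfacing as an error
                fmt.Printf("apiserver pid: %s", out)
                return
            }
            time.Sleep(200 * time.Millisecond)
        }
        fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
    }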
	I0921 22:21:41.645700  276511 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:21:41.645711  276511 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:21:41.645761  276511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:21:41.669678  276511 cri.go:87] found id: "8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	I0921 22:21:41.669709  276511 cri.go:87] found id: "3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843"
	I0921 22:21:41.669719  276511 cri.go:87] found id: "6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646"
	I0921 22:21:41.669728  276511 cri.go:87] found id: "a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb"
	I0921 22:21:41.669736  276511 cri.go:87] found id: "b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0"
	I0921 22:21:41.669746  276511 cri.go:87] found id: "b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409"
	I0921 22:21:41.669758  276511 cri.go:87] found id: ""
	I0921 22:21:41.669765  276511 cri.go:232] Stopping containers: [8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b 3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843 6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646 a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0 b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409]
	I0921 22:21:41.669831  276511 ssh_runner.go:195] Run: which crictl
	I0921 22:21:41.672722  276511 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b 3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843 6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646 a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0 b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409
	I0921 22:21:41.698115  276511 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:21:41.708176  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:21:41.715094  276511 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 21 22:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 21 22:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:08 /etc/kubernetes/scheduler.conf
	
	I0921 22:21:41.715152  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0921 22:21:41.721698  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0921 22:21:41.728286  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0921 22:21:38.477722  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:40.976919  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:42.977016  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:41.734815  276511 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.734874  276511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:21:41.741153  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0921 22:21:41.747551  276511 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.747599  276511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:21:41.753773  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:21:41.760238  276511 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
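Note: the grep/rm sequence above removes any kubeconfig under /etc/kubernetes that no longer points at the expected control-plane endpoint, so the following `kubeadm init phase kubeconfig` can regenerate it. A sketch of that cleanup, assuming the same file set and endpoint:

    // Sketch of the stale-kubeconfig cleanup logged above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            b, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: nothing to clean
            }
            if !strings.Contains(string(b), endpoint) {
                fmt.Println("removing stale", f)
                _ = os.Remove(f)
            }
        }
    }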
	I0921 22:21:41.760255  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:41.804588  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.356962  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.489434  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.539390  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.683809  276511 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:21:42.683920  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.194560  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.694761  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.776158  276511 api_server.go:71] duration metric: took 1.092348408s to wait for apiserver process to appear ...
	I0921 22:21:43.776236  276511 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:21:43.776260  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:43.776614  276511 api_server.go:256] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0921 22:21:44.276913  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:46.667105  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:21:46.667136  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:21:45.477739  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:47.976841  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:46.777448  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:46.781780  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:21:46.781806  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:47.277400  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:47.282106  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:21:47.282133  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:47.777302  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:47.781834  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:21:47.781871  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:48.277407  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:48.283340  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0921 22:21:48.290556  276511 api_server.go:140] control plane version: v1.25.2
	I0921 22:21:48.290586  276511 api_server.go:130] duration metric: took 4.514332252s to wait for apiserver health ...
	I0921 22:21:48.290599  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:48.290609  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:48.293728  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:21:48.295168  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:21:48.298937  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:21:48.298959  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:21:48.313543  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
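Note: the CNI step above writes the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the versioned kubectl pinned to the in-VM kubeconfig. A sketch with a placeholder manifest standing in for the 2429-byte YAML scp'd from memory in the log:

    // Sketch of the CNI manifest apply step logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifest := []byte("# kindnet DaemonSet YAML would go here\n") // placeholder
        if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
            fmt.Println("write failed:", err)
            return
        }
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.2/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }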
	I0921 22:21:49.163078  276511 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:21:49.170085  276511 system_pods.go:59] 9 kube-system pods found
	I0921 22:21:49.170122  276511 system_pods.go:61] "coredns-565d847f94-m8xgt" [67685b7a-28c7-49a1-a4aa-e82aadc5a69b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170132  276511 system_pods.go:61] "etcd-no-preload-20220921220832-10174" [0fca2788-2ad8-4e18-b8e5-e39cefa36c58] Running
	I0921 22:21:49.170141  276511 system_pods.go:61] "kindnet-27cj5" [90383218-a547-458a-8b5e-af84c9d2b017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0921 22:21:49.170148  276511 system_pods.go:61] "kube-apiserver-no-preload-20220921220832-10174" [3d9f96c7-a367-41ec-8423-c106fa567853] Running
	I0921 22:21:49.170160  276511 system_pods.go:61] "kube-controller-manager-no-preload-20220921220832-10174" [86ad77b8-aa2b-4d95-a588-48d9493546d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:21:49.170171  276511 system_pods.go:61] "kube-proxy-nxpf5" [ff6290f8-6cb7-4fae-99a2-7e36bb2e525b] Running
	I0921 22:21:49.170182  276511 system_pods.go:61] "kube-scheduler-no-preload-20220921220832-10174" [9c1e10b4-b7eb-4633-a544-62cbe7ed19d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0921 22:21:49.170196  276511 system_pods.go:61] "metrics-server-5c8fd5cf8-l82b6" [c17d4483-0758-4a2c-b310-2451393c8fa9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170208  276511 system_pods.go:61] "storage-provisioner" [51a29d45-5827-48fc-a122-67c7c5c5d190] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170220  276511 system_pods.go:74] duration metric: took 7.119308ms to wait for pod list to return data ...
	I0921 22:21:49.170236  276511 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:21:49.172624  276511 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:21:49.172663  276511 node_conditions.go:123] node cpu capacity is 8
	I0921 22:21:49.172674  276511 node_conditions.go:105] duration metric: took 2.43038ms to run NodePressure ...
	I0921 22:21:49.172699  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:49.303995  276511 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:21:49.307574  276511 kubeadm.go:778] kubelet initialised
	I0921 22:21:49.307598  276511 kubeadm.go:779] duration metric: took 3.577635ms waiting for restarted kubelet to initialise ...
	I0921 22:21:49.307604  276511 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
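Note: the "extra waiting" phase blocks until one pod per system-critical label is Ready, within a shared 4m budget. A sketch using `kubectl wait` for brevity; minikube itself polls the API directly, as the pod_ready lines below show:

    // Sketch of the per-label readiness wait logged above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        labels := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, l := range labels {
            cmd := exec.Command("kubectl", "wait", "--namespace=kube-system",
                "--for=condition=Ready", "pod", "-l", l, "--timeout=4m")
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("label %s not Ready: %v\n%s", l, err, out)
                return
            }
            fmt.Println("ready:", l)
        }
    }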
	I0921 22:21:49.312287  276511 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	I0921 22:21:51.318183  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:50.476802  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:52.977118  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:53.818525  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:56.318234  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:55.477148  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:57.473940  265259 pod_ready.go:81] duration metric: took 4m0.002309063s waiting for pod "coredns-565d847f94-qn9gp" in "kube-system" namespace to be "Ready" ...
	E0921 22:21:57.473968  265259 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-qn9gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:21:57.473989  265259 pod_ready.go:38] duration metric: took 4m0.007491689s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:21:57.474010  265259 kubeadm.go:631] restartCluster took 4m12.311396089s
	W0921 22:21:57.474123  265259 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:21:57.474151  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:22:00.342329  265259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.868152928s)
	I0921 22:22:00.342387  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:22:00.351706  265259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:22:00.358843  265259 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:22:00.358897  265259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:22:00.365576  265259 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:22:00.365616  265259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:22:00.405287  265259 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:22:00.405348  265259 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:22:00.433369  265259 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:22:00.433451  265259 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:22:00.433486  265259 kubeadm.go:317] OS: Linux
	I0921 22:22:00.433611  265259 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:22:00.433682  265259 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:22:00.433726  265259 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:22:00.433768  265259 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:22:00.433805  265259 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:22:00.433852  265259 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:22:00.433893  265259 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:22:00.434000  265259 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:22:00.434102  265259 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:22:00.502463  265259 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:22:00.502591  265259 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:22:00.502721  265259 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:22:00.621941  265259 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:22:00.626833  265259 out.go:204]   - Generating certificates and keys ...
	I0921 22:22:00.626978  265259 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:22:00.627053  265259 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:22:00.627158  265259 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:22:00.627246  265259 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:22:00.627351  265259 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:22:00.627410  265259 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:22:00.627483  265259 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:22:00.627551  265259 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:22:00.627613  265259 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:22:00.627685  265259 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:22:00.627760  265259 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:22:00.627816  265259 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:22:00.721598  265259 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:22:00.898538  265259 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:22:00.999773  265259 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:22:01.056843  265259 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:22:01.068556  265259 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:22:01.069535  265259 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:22:01.069603  265259 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:22:01.152435  265259 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:21:58.818354  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:01.317924  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:01.154531  265259 out.go:204]   - Booting up control plane ...
	I0921 22:22:01.154652  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:22:01.154956  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:22:01.156705  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:22:01.157879  265259 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:22:01.159675  265259 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:22:03.318485  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:05.318662  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:07.161939  265259 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002148 seconds
	I0921 22:22:07.162112  265259 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:22:07.170819  265259 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:22:07.689049  265259 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:22:07.689253  265259 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220921220439-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:22:08.196658  265259 kubeadm.go:317] [bootstrap-token] Using token: 6acdlb.hwh133k5t8mfdxv9
	I0921 22:22:08.198133  265259 out.go:204]   - Configuring RBAC rules ...
	I0921 22:22:08.198241  265259 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:22:08.202013  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:22:08.206522  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:22:08.208686  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:22:08.210651  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:22:08.212506  265259 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:22:08.219466  265259 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:22:08.394068  265259 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:22:08.606483  265259 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:22:08.607838  265259 kubeadm.go:317] 
	I0921 22:22:08.607927  265259 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:22:08.607941  265259 kubeadm.go:317] 
	I0921 22:22:08.608028  265259 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:22:08.608042  265259 kubeadm.go:317] 
	I0921 22:22:08.608070  265259 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:22:08.608136  265259 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:22:08.608199  265259 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:22:08.608205  265259 kubeadm.go:317] 
	I0921 22:22:08.608270  265259 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:22:08.608279  265259 kubeadm.go:317] 
	I0921 22:22:08.608333  265259 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:22:08.608341  265259 kubeadm.go:317] 
	I0921 22:22:08.608405  265259 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:22:08.608491  265259 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:22:08.608575  265259 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:22:08.608582  265259 kubeadm.go:317] 
	I0921 22:22:08.608682  265259 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:22:08.608771  265259 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:22:08.608778  265259 kubeadm.go:317] 
	I0921 22:22:08.608870  265259 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 6acdlb.hwh133k5t8mfdxv9 \
	I0921 22:22:08.608983  265259 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:22:08.609009  265259 kubeadm.go:317] 	--control-plane 
	I0921 22:22:08.609016  265259 kubeadm.go:317] 
	I0921 22:22:08.609128  265259 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:22:08.609134  265259 kubeadm.go:317] 
	I0921 22:22:08.609197  265259 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 6acdlb.hwh133k5t8mfdxv9 \
	I0921 22:22:08.609284  265259 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:22:08.611756  265259 kubeadm.go:317] W0921 22:22:00.400408    3296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:22:08.612043  265259 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:22:08.612188  265259 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:22:08.612219  265259 cni.go:95] Creating CNI manager for ""
	I0921 22:22:08.612229  265259 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:22:08.614511  265259 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:22:07.818323  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:10.317822  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:08.615918  265259 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:22:08.676246  265259 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:22:08.676274  265259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:22:08.693653  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:22:09.436658  265259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:22:09.436794  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:09.436795  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=embed-certs-20220921220439-10174 minikube.k8s.io/updated_at=2022_09_21T22_22_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:09.443334  265259 ops.go:34] apiserver oom_adj: -16
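
ops.go confirms the apiserver's oom_adj is -16, meaning the kernel is unlikely to OOM-kill it. A sketch of the same check without the bash one-liner; the pgrep lookup and direct /proc read are assumptions standing in for the ssh_runner call, and a single apiserver pid is assumed:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // readAPIServerOOMAdj finds the kube-apiserver pid with pgrep and
    // reads its oom_adj from /proc, as the logged one-liner does.
    func readAPIServerOOMAdj() (string, error) {
    	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		return "", err
    	}
    	b, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(b)), nil
    }

    func main() {
    	v, err := readAPIServerOOMAdj()
    	fmt.Println(v, err) // the run above logged -16
    }
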
	I0921 22:22:09.528958  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:10.110988  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:10.611127  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:11.110374  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:11.610630  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:12.110987  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:12.611437  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:12.318364  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:14.318738  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:13.110661  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:13.610755  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:14.111256  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:14.610806  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:15.110885  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:15.610612  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:16.111276  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:16.611348  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:17.110520  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:17.610377  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:16.817958  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:18.818500  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:21.317777  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:18.110696  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:18.610392  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:19.110996  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:19.611138  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:20.111351  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:20.610608  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:21.111055  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:21.611123  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:21.738997  265259 kubeadm.go:1067] duration metric: took 12.302271715s to wait for elevateKubeSystemPrivileges.
	I0921 22:22:21.739026  265259 kubeadm.go:398] StartCluster complete in 4m36.622037809s
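
The half-second cadence of the `kubectl get sa default` lines above is the elevateKubeSystemPrivileges wait: the harness retries until the default ServiceAccount exists, the usual signal that the control plane can serve API writes. A minimal sketch of that polling loop, with the binary and kubeconfig paths taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries "kubectl get sa default" every 500ms until
    // it succeeds or the deadline passes, mirroring the loop in the log.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig).Run()
    		if err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.25.2/kubectl",
    		"/var/lib/minikube/kubeconfig", time.Minute)
    	fmt.Println(err)
    }
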
	I0921 22:22:21.739041  265259 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:22:21.739131  265259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:22:21.740483  265259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:22:22.256205  265259 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220921220439-10174" rescaled to 1
	I0921 22:22:22.256273  265259 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:22:22.260022  265259 out.go:177] * Verifying Kubernetes components...
	I0921 22:22:22.256319  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:22:22.256360  265259 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:22:22.256527  265259 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:22:22.261883  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:22:22.261936  265259 addons.go:65] Setting dashboard=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261954  265259 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261957  265259 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261939  265259 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261968  265259 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220921220439-10174"
	W0921 22:22:22.261978  265259 addons.go:162] addon metrics-server should already be in state true
	I0921 22:22:22.261978  265259 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220921220439-10174"
	I0921 22:22:22.261965  265259 addons.go:153] Setting addon dashboard=true in "embed-certs-20220921220439-10174"
	W0921 22:22:22.261991  265259 addons.go:162] addon dashboard should already be in state true
	I0921 22:22:22.261979  265259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220921220439-10174"
	I0921 22:22:22.262027  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.262047  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	W0921 22:22:22.261992  265259 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:22:22.262152  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.262334  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262530  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262581  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262607  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.275321  265259 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:22:22.305122  265259 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:22:22.302980  265259 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220921220439-10174"
	I0921 22:22:22.308412  265259 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:22:22.306819  265259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0921 22:22:22.306823  265259 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:22:22.310056  265259 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:22:22.310063  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:22:22.311661  265259 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:22:22.311677  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:22:22.310084  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:22:22.313452  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:22:22.313470  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:22:22.313520  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.310090  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.311776  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.311792  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.314077  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.348685  265259 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:22:22.348718  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:22:22.348780  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.350631  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.350663  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.355060  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.379378  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:22:22.385678  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.494559  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:22:22.494597  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:22:22.494751  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:22:22.494781  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:22:22.500109  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:22:22.589944  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:22:22.589985  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:22:22.592642  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:22:22.592670  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:22:22.598738  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:22:22.686188  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:22:22.686217  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:22:22.692562  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:22:22.692589  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:22:22.777831  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:22:22.795247  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:22:22.795282  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:22:22.886056  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:22:22.886085  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:22:22.987013  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:22:22.987091  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:22:23.081859  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:22:23.081893  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:22:23.177114  265259 start.go:810] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
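
The replace pipeline at 22:22:22.379378 is what this line reports as injected: a CoreDNS hosts stanza is spliced in ahead of the `forward . /etc/resolv.conf` block so that host.minikube.internal resolves to the gateway 192.168.67.1. The stanza, reproduced verbatim from the sed expression as a Go raw string:

    package main

    import "fmt"

    // The block the sed pipeline inserts into the coredns ConfigMap;
    // content copied verbatim from the replace command in the log.
    const minikubeHostsStanza = `        hosts {
               192.168.67.1 host.minikube.internal
               fallthrough
            }`

    func main() { fmt.Println(minikubeHostsStanza) }
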
	I0921 22:22:23.181203  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:22:23.181235  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:22:23.203322  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:22:23.203354  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:22:23.284061  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:22:23.584574  265259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084417298s)
	I0921 22:22:23.777502  265259 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220921220439-10174"
	I0921 22:22:24.109571  265259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:22:23.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:26.317609  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:24.111462  265259 addons.go:414] enableAddons completed in 1.8551066s
	I0921 22:22:24.289051  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:26.789101  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:28.317733  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:30.318197  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:29.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:31.289419  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:32.318313  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:34.318420  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:33.789632  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:36.289225  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:36.818347  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:39.317465  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:38.289442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:40.789366  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:41.817507  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:43.817568  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:45.818320  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:43.289515  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:45.789266  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:47.789660  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:48.318077  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:50.318393  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:50.288988  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:52.289299  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:52.817366  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:54.818323  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:54.789136  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:56.789849  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:57.318147  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:59.818131  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:59.289439  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:01.789349  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:01.818325  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:04.318178  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:03.789567  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:06.289658  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:06.817937  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:08.818155  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:10.818493  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:08.289818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:10.290273  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:12.789081  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:13.317568  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:15.318068  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:14.789291  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:16.789892  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:17.818331  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:20.318055  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:18.790028  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:21.289295  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:22.817832  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:24.818318  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:23.289408  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:25.789350  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:27.789995  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:27.317384  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:29.318499  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:30.288831  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:32.289573  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:31.818328  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:34.317921  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:36.318549  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:34.789272  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:36.789372  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:38.817288  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:40.818024  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:38.789452  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:41.288941  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:43.317555  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:45.817730  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:43.290284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:45.789282  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:47.789698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:48.318608  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:50.818073  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:50.289450  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:52.789754  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:52.818642  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:55.317928  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
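
Both polling streams above are stuck on the same condition: pid 276511's coredns pod stays Pending because the only node carries the node.kubernetes.io/not-ready taint, and pid 265259's node never reports Ready, so the taint is never lifted. A hedged client-go sketch for listing node taints to confirm this from outside the test; the kubeconfig path is an assumption:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		for _, t := range n.Spec.Taints {
    			// a not-yet-Ready node carries node.kubernetes.io/not-ready
    			fmt.Printf("%s  %s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
    		}
    	}
    }
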
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1058c41aafbb8       d921cee849482       3 minutes ago       Exited              kindnet-cni               3                   08adfd3bc0694
	e1b3d54125fe2       1c7d8c51823b5       12 minutes ago      Running             kube-proxy                0                   c4181b02eb4c6
	2654f64b12dee       a8a176a5d5d69       12 minutes ago      Running             etcd                      0                   6d751c20960c7
	1bb50adc50b7e       97801f8394908       12 minutes ago      Running             kube-apiserver            0                   bb52b5028f7b8
	9e931b83ea689       dbfceb93c69b6       12 minutes ago      Running             kube-controller-manager   0                   e0dba18ec8f49
	8a3d458869b09       ca0ea1ee3cfd3       12 minutes ago      Running             kube-scheduler            0                   8550ad3b2adb2
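
In the status table, ATTEMPT 3 plus STATE Exited for kindnet-cni is the restart loop that the containerd log below records as repeated CreateContainer Attempt:2/Attempt:3 calls and shim exits, while the control-plane containers keep running. The same counter surfaces through the API as ContainerStatus.RestartCount; a sketch using client-go, with the kubeconfig path assumed:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		for _, s := range p.Status.ContainerStatuses {
    			// RestartCount corresponds to the ATTEMPT column above
    			fmt.Printf("%s/%s restarts=%d ready=%v\n", p.Name, s.Name, s.RestartCount, s.Ready)
    		}
    	}
    }
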
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:11:26 UTC, end at Wed 2022-09-21 22:23:58 UTC. --
	Sep 21 22:17:14 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:14.734694599Z" level=warning msg="cleaning up after shim disconnected" id=0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d namespace=k8s.io
	Sep 21 22:17:14 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:14.734716304Z" level=info msg="cleaning up dead shim"
	Sep 21 22:17:14 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:14.745039485Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:17:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2477 runtime=io.containerd.runc.v2\n"
	Sep 21 22:17:15 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:15.364213029Z" level=info msg="RemoveContainer for \"2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a\""
	Sep 21 22:17:15 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:15.371604573Z" level=info msg="RemoveContainer for \"2d8abb3e4771063680511a2a2049ed0f4b2e8bae9c8c5229fd30401358a46f3a\" returns successfully"
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.713179853Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.731906456Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\""
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.732700732Z" level=info msg="StartContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\""
	Sep 21 22:17:28 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:17:28.980582670Z" level=info msg="StartContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\" returns successfully"
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.502506566Z" level=info msg="shim disconnected" id=d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.502581489Z" level=warning msg="cleaning up after shim disconnected" id=d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7 namespace=k8s.io
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.502605699Z" level=info msg="cleaning up dead shim"
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.514026043Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:20:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2589 runtime=io.containerd.runc.v2\n"
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.684050023Z" level=info msg="RemoveContainer for \"0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d\""
	Sep 21 22:20:09 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:09.691382521Z" level=info msg="RemoveContainer for \"0cd296271116fe0d43cabed84c5faf298cf1d9b7162daeb2796de31c9e80995d\" returns successfully"
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.709696848Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.722517453Z" level=info msg="CreateContainer within sandbox \"08adfd3bc069482c3e3dfc7c5de31fc00c286cec1e929efac6d4107811253a70\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5\""
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.723028410Z" level=info msg="StartContainer for \"1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5\""
	Sep 21 22:20:37 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:20:37.803679482Z" level=info msg="StartContainer for \"1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5\" returns successfully"
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.322084474Z" level=info msg="shim disconnected" id=1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.322150994Z" level=warning msg="cleaning up after shim disconnected" id=1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 namespace=k8s.io
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.322167635Z" level=info msg="cleaning up dead shim"
	Sep 21 22:23:18 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:18.332704162Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:23:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2706 runtime=io.containerd.runc.v2\n"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:19.036383091Z" level=info msg="RemoveContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\""
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 containerd[513]: time="2022-09-21T22:23:19.042716153Z" level=info msg="RemoveContainer for \"d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7\" returns successfully"
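	The kindnet-cni container above is cycling: each StartContainer is followed a few minutes later by a shim disconnect and a RemoveContainer of the prior attempt (Attempt:2, then Attempt:3). A minimal triage sketch, assuming the crictl defaults that ship in the minikube node image (the profile name and container ID are taken from the log above):
	
	  # list every kindnet-cni container, including exited attempts
	  minikube ssh -p default-k8s-different-port-20220921221118-10174 "sudo crictl ps -a --name kindnet-cni"
	  # dump the output of the most recent failed attempt
	  minikube ssh -p default-k8s-different-port-20220921221118-10174 "sudo crictl logs 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"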
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220921221118-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220921221118-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_11_39_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:11:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220921221118-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:22:00 +0000   Wed, 21 Sep 2022 22:11:33 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-different-port-20220921221118-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                15db467d-fd65-4163-8719-8617da0ee9c6
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220921221118-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-7wbpp                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220921221118-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220921221118-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lzphc                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220921221118-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node default-k8s-different-port-20220921221118-10174 event: Registered Node default-k8s-different-port-20220921221118-10174 in Controller
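	The Ready=False condition (cni plugin not initialized) and the node.kubernetes.io/not-ready:NoSchedule taint above are the same failure seen from two angles; the taint stays in place until the CNI pod recovers. A quick confirmation, assuming kubectl still points at this context:
	
	  kubectl --context default-k8s-different-port-20220921221118-10174 \
	    get node default-k8s-different-port-20220921221118-10174 \
	    -o jsonpath='{.spec.taints[*].key}{"\n"}'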
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
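	The repeated "martian source" lines mean the kernel received packets sourced from the pod CIDR (10.244.0.4) on eth0, where that source is not routable; this is expected noise while the CNI is down, and it only appears because martian logging is enabled. A sketch for inspecting or silencing the logging on the node (kernel sysctls; silencing does not fix the underlying routing):
	
	  sysctl net.ipv4.conf.all.log_martians
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0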
	
	* 
	* ==> etcd [2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01] <==
	* {"level":"info","ts":"2022-09-21T22:11:32.803Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:11:32.991Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-different-port-20220921221118-10174 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:11:32.992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:11:32.994Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:11:32.994Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2022-09-21T22:17:27.148Z","caller":"traceutil/trace.go:171","msg":"trace[81568497] linearizableReadLoop","detail":"{readStateIndex:557; appliedIndex:557; }","duration":"144.279554ms","start":"2022-09-21T22:17:27.003Z","end":"2022-09-21T22:17:27.148Z","steps":["trace[81568497] 'read index received'  (duration: 144.269717ms)","trace[81568497] 'applied index is now lower than readState.Index'  (duration: 8.442µs)"],"step_count":2}
	{"level":"warn","ts":"2022-09-21T22:17:27.186Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"182.102976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:341"}
	{"level":"info","ts":"2022-09-21T22:17:27.186Z","caller":"traceutil/trace.go:171","msg":"trace[4217368] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:473; }","duration":"182.219248ms","start":"2022-09-21T22:17:27.003Z","end":"2022-09-21T22:17:27.186Z","steps":["trace[4217368] 'agreement among raft nodes before linearized reading'  (duration: 144.455002ms)","trace[4217368] 'range keys from in-memory index tree'  (duration: 37.603398ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:17:57.177Z","caller":"traceutil/trace.go:171","msg":"trace[1122206554] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"166.694194ms","start":"2022-09-21T22:17:57.010Z","end":"2022-09-21T22:17:57.177Z","steps":["trace[1122206554] 'process raft request'  (duration: 75.971431ms)","trace[1122206554] 'compare'  (duration: 90.577282ms)"],"step_count":2}
	{"level":"info","ts":"2022-09-21T22:21:33.333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":456}
	{"level":"info","ts":"2022-09-21T22:21:33.334Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":456,"took":"398.898µs"}
	
	* 
	* ==> kernel <==
	*  22:23:58 up  1:06,  0 users,  load average: 0.99, 1.30, 1.76
	Linux default-k8s-different-port-20220921221118-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2] <==
	* I0921 22:11:35.575804       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0921 22:11:35.575972       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0921 22:11:35.576041       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0921 22:11:35.576424       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0921 22:11:35.576630       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0921 22:11:35.576649       1 cache.go:39] Caches are synced for autoregister controller
	I0921 22:11:35.592140       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0921 22:11:35.599285       1 controller.go:616] quota admission added evaluator for: namespaces
	I0921 22:11:36.242080       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0921 22:11:36.463042       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0921 22:11:36.466360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0921 22:11:36.466387       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0921 22:11:36.848899       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0921 22:11:36.897461       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0921 22:11:36.989327       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0921 22:11:36.994282       1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0921 22:11:36.995247       1 controller.go:616] quota admission added evaluator for: endpoints
	I0921 22:11:36.999018       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0921 22:11:37.513301       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0921 22:11:38.547983       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0921 22:11:38.554911       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0921 22:11:38.562442       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0921 22:11:38.625519       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0921 22:11:51.719506       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0921 22:11:51.768557       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7] <==
	* I0921 22:11:50.915896       1 shared_informer.go:262] Caches are synced for ephemeral
	I0921 22:11:50.915920       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0921 22:11:50.915920       1 shared_informer.go:262] Caches are synced for HPA
	I0921 22:11:50.915965       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0921 22:11:50.915969       1 shared_informer.go:262] Caches are synced for daemon sets
	I0921 22:11:50.916011       1 shared_informer.go:262] Caches are synced for deployment
	I0921 22:11:50.916180       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0921 22:11:50.916336       1 shared_informer.go:262] Caches are synced for job
	I0921 22:11:50.916393       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0921 22:11:50.916669       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0921 22:11:50.919495       1 shared_informer.go:262] Caches are synced for cronjob
	I0921 22:11:50.920490       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0921 22:11:51.021946       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:11:51.044665       1 shared_informer.go:262] Caches are synced for attach detach
	I0921 22:11:51.072661       1 shared_informer.go:262] Caches are synced for resource quota
	I0921 22:11:51.479876       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:11:51.515801       1 shared_informer.go:262] Caches are synced for garbage collector
	I0921 22:11:51.515827       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0921 22:11:51.721293       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-565d847f94 to 2"
	I0921 22:11:51.774170       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lzphc"
	I0921 22:11:51.776223       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7wbpp"
	I0921 22:11:51.913709       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-565d847f94 to 1 from 2"
	I0921 22:11:51.921133       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-hhmh6"
	I0921 22:11:51.926325       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-565d847f94-mrkjn"
	I0921 22:11:51.984101       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-565d847f94-hhmh6"
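	The coredns scale-down from 2 to 1 directly above is minikube trimming CoreDNS to a single replica, not a crash; the controller-manager log is otherwise a normal bootstrap. To verify the resulting replica count (assuming the context is still reachable):
	
	  kubectl --context default-k8s-different-port-20220921221118-10174 -n kube-system get deployment coredns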
	
	* 
	* ==> kube-proxy [e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608] <==
	* I0921 22:11:52.320700       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0921 22:11:52.320772       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0921 22:11:52.320813       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:11:52.340612       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:11:52.340647       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:11:52.340656       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:11:52.340676       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:11:52.340703       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:11:52.340862       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:11:52.341069       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:11:52.341099       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:11:52.341713       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:11:52.341738       1 config.go:317] "Starting service config controller"
	I0921 22:11:52.341749       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:11:52.341752       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:11:52.341786       1 config.go:444] "Starting node config controller"
	I0921 22:11:52.341804       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:11:52.442259       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0921 22:11:52.442300       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:11:52.442317       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767] <==
	* E0921 22:11:35.585266       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:11:35.585270       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:11:35.585278       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:11:35.585375       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:11:35.585400       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:11:35.585404       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:35.585402       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:11:35.585415       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:11:35.585422       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:35.585423       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:11:35.585324       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:11:35.585438       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:11:36.465124       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.465238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.477413       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:11:36.477473       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0921 22:11:36.492805       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.492841       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.501968       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0921 22:11:36.502004       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0921 22:11:36.596856       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:11:36.596889       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:11:36.676392       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:11:36.676437       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0921 22:11:38.681580       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
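	The forbidden list/watch errors above are a normal startup race: the scheduler's informers come up before RBAC bootstrap grants system:kube-scheduler its permissions, and they stop once caches sync (the final line). If they persisted, one hedged check would be that the bootstrap binding exists:
	
	  kubectl --context default-k8s-different-port-20220921221118-10174 \
	    get clusterrolebinding system:kube-scheduler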
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:11:26 UTC, end at Wed 2022-09-21 22:23:58 UTC. --
	Sep 21 22:22:39 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:39.073373    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:44 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:44.074085    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:49 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:49.075082    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:54 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:54.076474    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:22:59 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:22:59.077808    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:04 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:04.079377    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:09 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:09.080738    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:14 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:14.082350    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:19.035161    1301 scope.go:115] "RemoveContainer" containerID="d339c4a0d22a20df8b02eea9f9b88582783e8aa76c6d41fcba90e2cf52f1acc7"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:19.035501    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:19.035946    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:19 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:19.084090    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:24 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:24.085039    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:29 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:29.085747    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:29 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:29.706721    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:29 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:29.707061    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:34 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:34.086774    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:39 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:39.088167    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:41 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:41.706688    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:41 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:41.706965    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:44 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:44.089557    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:49 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:49.090437    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:23:53 default-k8s-different-port-20220921221118-10174 kubelet[1301]: I0921 22:23:53.706819    1301 scope.go:115] "RemoveContainer" containerID="1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	Sep 21 22:23:53 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:53.707078    1301 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7wbpp_kube-system(3f16ae0b-2f66-4f1e-b234-74570472a7f8)\"" pod="kube-system/kindnet-7wbpp" podUID=3f16ae0b-2f66-4f1e-b234-74570472a7f8
	Sep 21 22:23:54 default-k8s-different-port-20220921221118-10174 kubelet[1301]: E0921 22:23:54.092015    1301 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
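	The kubelet is stuck in the same loop throughout: kindnet-cni is in CrashLoopBackOff (40s back-off), so the CNI never initializes and the runtime network stays NotReady. The actual crash reason lives in the previous container's output; a minimal sketch for pulling it through the API (pod and container names are taken from the log above):
	
	  kubectl --context default-k8s-different-port-20220921221118-10174 -n kube-system \
	    logs kindnet-7wbpp -c kindnet-cni --previous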
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-565d847f94-mrkjn storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe pod busybox coredns-565d847f94-mrkjn storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod busybox coredns-565d847f94-mrkjn storage-provisioner: exit status 1 (66.542345ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wppsb (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-wppsb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m50s (x2 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-mrkjn" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod busybox coredns-565d847f94-mrkjn storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.49s)
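The failure chain for DeployApp is consistent across the logs above: kindnet-cni crash-loops, the CNI never initializes, the node keeps its not-ready taint, and the busybox pod therefore never schedules. A hedged one-liner to surface the scheduling side of that chain (assuming the context is still live):

  kubectl --context default-k8s-different-port-20220921221118-10174 get events \
    --field-selector reason=FailedScheduling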

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (536.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220921220439-10174 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2
E0921 22:17:42.748650   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220921220439-10174 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: exit status 80 (8m54.29402588s)

                                                
                                                
-- stdout --
	* [embed-certs-20220921220439-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-20220921220439-10174 in cluster embed-certs-20220921220439-10174
	* Pulling base image ...
	* Restarting existing docker container for "embed-certs-20220921220439-10174" ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.6.0
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:17:28.095328  265259 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:17:28.095441  265259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:28.095451  265259 out.go:309] Setting ErrFile to fd 2...
	I0921 22:17:28.095458  265259 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:17:28.095603  265259 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:17:28.096692  265259 out.go:303] Setting JSON to false
	I0921 22:17:28.098443  265259 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3599,"bootTime":1663795049,"procs":552,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:17:28.098521  265259 start.go:125] virtualization: kvm guest
	I0921 22:17:28.100841  265259 out.go:177] * [embed-certs-20220921220439-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:17:28.102450  265259 notify.go:214] Checking for updates...
	I0921 22:17:28.102459  265259 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:17:28.104077  265259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:17:28.105701  265259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:17:28.107233  265259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:17:28.108898  265259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:17:28.110956  265259 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:17:28.111500  265259 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:17:28.149707  265259 docker.go:137] docker version: linux-20.10.18
	I0921 22:17:28.149826  265259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:28.255950  265259 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:28.173805914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:17:28.256076  265259 docker.go:254] overlay module found
	I0921 22:17:28.258061  265259 out.go:177] * Using the docker driver based on existing profile
	I0921 22:17:28.259386  265259 start.go:284] selected driver: docker
	I0921 22:17:28.259407  265259 start.go:808] validating driver "docker" against &{Name:embed-certs-20220921220439-10174 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:28.259523  265259 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:17:28.260656  265259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:17:28.380535  265259 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-09-21 22:17:28.28451902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
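
The docker info struct above is decoded from docker system info --format "{{json .}}". A minimal sketch of the same probe in Go, assuming only that the docker CLI is on PATH, and decoding into a generic map rather than minikube's typed struct:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Ask the docker CLI to emit its full "info" payload as one JSON object.
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		log.Fatalf("docker system info: %v", err)
    	}
    	// Decode into a generic map; a real client would use a typed struct.
    	var info map[string]interface{}
    	if err := json.Unmarshal(out, &info); err != nil {
    		log.Fatalf("decode: %v", err)
    	}
    	fmt.Println("server version:    ", info["ServerVersion"])
    	fmt.Println("cgroup driver:     ", info["CgroupDriver"])
    	fmt.Println("running containers:", info["ContainersRunning"])
    }
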
	I0921 22:17:28.380840  265259 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:17:28.380871  265259 cni.go:95] Creating CNI manager for ""
	I0921 22:17:28.380888  265259 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:17:28.380912  265259 start_flags.go:316] config:
	{Name:embed-certs-20220921220439-10174 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:28.384085  265259 out.go:177] * Starting control plane node embed-certs-20220921220439-10174 in cluster embed-certs-20220921220439-10174
	I0921 22:17:28.385882  265259 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:17:28.387398  265259 out.go:177] * Pulling base image ...
	I0921 22:17:28.388993  265259 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:17:28.389048  265259 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:17:28.389066  265259 cache.go:57] Caching tarball of preloaded images
	I0921 22:17:28.389097  265259 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:17:28.389318  265259 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:17:28.389342  265259 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:17:28.389498  265259 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/config.json ...
	I0921 22:17:28.427244  265259 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:17:28.427281  265259 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:17:28.427306  265259 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:17:28.427355  265259 start.go:364] acquiring machines lock for embed-certs-20220921220439-10174: {Name:mk045ddc97e52cc6fb76c850f85eeab9304c52af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:17:28.427484  265259 start.go:368] acquired machines lock for "embed-certs-20220921220439-10174" in 87.485µs
	I0921 22:17:28.427521  265259 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:17:28.427530  265259 fix.go:55] fixHost starting: 
	I0921 22:17:28.428036  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:17:28.491875  265259 fix.go:103] recreateIfNeeded on embed-certs-20220921220439-10174: state=Stopped err=<nil>
	W0921 22:17:28.491923  265259 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:17:28.494840  265259 out.go:177] * Restarting existing docker container for "embed-certs-20220921220439-10174" ...
	I0921 22:17:28.496605  265259 cli_runner.go:164] Run: docker start embed-certs-20220921220439-10174
	I0921 22:17:28.935510  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:17:28.967147  265259 kic.go:415] container "embed-certs-20220921220439-10174" state is running.
	I0921 22:17:28.967610  265259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220921220439-10174
	I0921 22:17:28.999797  265259 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/config.json ...
	I0921 22:17:29.000026  265259 machine.go:88] provisioning docker machine ...
	I0921 22:17:29.000053  265259 ubuntu.go:169] provisioning hostname "embed-certs-20220921220439-10174"
	I0921 22:17:29.000099  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:17:29.029062  265259 main.go:134] libmachine: Using SSH client type: native
	I0921 22:17:29.029272  265259 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49428 <nil> <nil>}
	I0921 22:17:29.029292  265259 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220921220439-10174 && echo "embed-certs-20220921220439-10174" | sudo tee /etc/hostname
	I0921 22:17:29.029905  265259 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37218->127.0.0.1:49428: read: connection reset by peer
	I0921 22:17:32.172978  265259 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220921220439-10174
	
	I0921 22:17:32.173082  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:17:32.198531  265259 main.go:134] libmachine: Using SSH client type: native
	I0921 22:17:32.198680  265259 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49428 <nil> <nil>}
	I0921 22:17:32.198700  265259 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220921220439-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220921220439-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220921220439-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:17:32.327560  265259 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:17:32.327589  265259 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:17:32.327607  265259 ubuntu.go:177] setting up certificates
	I0921 22:17:32.327618  265259 provision.go:83] configureAuth start
	I0921 22:17:32.327680  265259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220921220439-10174
	I0921 22:17:32.352253  265259 provision.go:138] copyHostCerts
	I0921 22:17:32.352307  265259 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:17:32.352320  265259 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:17:32.352391  265259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:17:32.352494  265259 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:17:32.352508  265259 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:17:32.352545  265259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:17:32.352635  265259 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:17:32.352649  265259 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:17:32.352701  265259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:17:32.352783  265259 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220921220439-10174 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220921220439-10174]
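
provision.go generates a server certificate whose SANs cover the node IP, loopback, and the host names in the san=[...] list above. A self-contained sketch of issuing such a cert with Go's crypto/x509, self-signed here for brevity (minikube actually signs server.pem with its own CA; the names and lifetime below are copied from the log, everything else is illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220921220439-10174"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the san=[...] list in the log line above.
    		DNSNames:    []string{"localhost", "minikube", "embed-certs-20220921220439-10174"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed: template doubles as the parent certificate.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
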
	I0921 22:17:32.479832  265259 provision.go:172] copyRemoteCerts
	I0921 22:17:32.479906  265259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:17:32.479939  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:17:32.505501  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:17:32.599511  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:17:32.616966  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:17:32.634090  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0921 22:17:32.651113  265259 provision.go:86] duration metric: configureAuth took 323.483962ms
	I0921 22:17:32.651174  265259 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:17:32.651338  265259 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:17:32.651350  265259 machine.go:91] provisioned docker machine in 3.651308712s
	I0921 22:17:32.651357  265259 start.go:300] post-start starting for "embed-certs-20220921220439-10174" (driver="docker")
	I0921 22:17:32.651363  265259 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:17:32.651407  265259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:17:32.651448  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:17:32.676272  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:17:32.767699  265259 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:17:32.770407  265259 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:17:32.770431  265259 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:17:32.770441  265259 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:17:32.770449  265259 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:17:32.770465  265259 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:17:32.770523  265259 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:17:32.770611  265259 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:17:32.770708  265259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:17:32.777703  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:17:32.795446  265259 start.go:303] post-start completed in 144.0765ms
	I0921 22:17:32.795526  265259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:17:32.795573  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:17:32.824051  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:17:32.912244  265259 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:17:32.916108  265259 fix.go:57] fixHost completed within 4.488574812s
	I0921 22:17:32.916130  265259 start.go:83] releasing machines lock for "embed-certs-20220921220439-10174", held for 4.488624344s
	I0921 22:17:32.916209  265259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220921220439-10174
	I0921 22:17:32.940061  265259 ssh_runner.go:195] Run: systemctl --version
	I0921 22:17:32.940115  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:17:32.940169  265259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:17:32.940248  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:17:32.966455  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:17:32.968525  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:17:33.084107  265259 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:17:33.095318  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:17:33.104913  265259 docker.go:188] disabling docker service ...
	I0921 22:17:33.104971  265259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:17:33.114753  265259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:17:33.124153  265259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:17:33.204367  265259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:17:33.283920  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:17:33.292993  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:17:33.305627  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:17:33.313820  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:17:33.321957  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:17:33.329812  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
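
The four sed invocations above each rewrite one key in /etc/containerd/config.toml in place. A rough Go equivalent of that line-oriented edit, operating on a hypothetical local copy of the file (minikube itself shells out to sed over SSH as shown):

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	path := "config.toml" // hypothetical local copy of /etc/containerd/config.toml
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Mirror the sed expressions: replace any whole line that sets these keys.
    	edits := map[string]string{
    		`(?m)^.*sandbox_image = .*$`:          `sandbox_image = "registry.k8s.io/pause:3.8"`,
    		`(?m)^.*restrict_oom_score_adj = .*$`: `restrict_oom_score_adj = false`,
    		`(?m)^.*SystemdCgroup = .*$`:          `SystemdCgroup = false`,
    		`(?m)^.*conf_dir = .*$`:               `conf_dir = "/etc/cni/net.mk"`,
    	}
    	for pat, repl := range edits {
    		data = regexp.MustCompile(pat).ReplaceAll(data, []byte(repl))
    	}
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		log.Fatal(err)
    	}
    }
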
	I0921 22:17:33.337732  265259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:17:33.343799  265259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:17:33.349955  265259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:17:33.428013  265259 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:17:33.501914  265259 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:17:33.501983  265259 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:17:33.505725  265259 start.go:471] Will wait 60s for crictl version
	I0921 22:17:33.505786  265259 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:17:33.531812  265259 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:17:33Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:17:44.578649  265259 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:17:44.603417  265259 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:17:44.603490  265259 ssh_runner.go:195] Run: containerd --version
	I0921 22:17:44.632526  265259 ssh_runner.go:195] Run: containerd --version
	I0921 22:17:44.662710  265259 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:17:44.664089  265259 cli_runner.go:164] Run: docker network inspect embed-certs-20220921220439-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:17:44.689227  265259 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0921 22:17:44.692595  265259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:17:44.702063  265259 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:17:44.702165  265259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:17:44.726529  265259 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:17:44.726554  265259 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:17:44.726593  265259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:17:44.749297  265259 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:17:44.749323  265259 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:17:44.749370  265259 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:17:44.772333  265259 cni.go:95] Creating CNI manager for ""
	I0921 22:17:44.772359  265259 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:17:44.772370  265259 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:17:44.772382  265259 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220921220439-10174 NodeName:embed-certs-20220921220439-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:17:44.772506  265259 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220921220439-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
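	
	The generated kubeadm config above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A quick sanity check that decodes such a stream document by document, assuming gopkg.in/yaml.v3 is available and a local copy of the file exists (kubeadm itself performs much stricter, schema-aware validation):
	
	    package main
	
	    import (
	    	"fmt"
	    	"io"
	    	"log"
	    	"os"
	
	    	"gopkg.in/yaml.v3"
	    )
	
	    func main() {
	    	f, err := os.Open("kubeadm.yaml") // hypothetical copy of /var/tmp/minikube/kubeadm.yaml
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer f.Close()
	    	dec := yaml.NewDecoder(f) // iterates over "---"-separated documents
	    	for {
	    		var doc map[string]interface{}
	    		if err := dec.Decode(&doc); err == io.EOF {
	    			break
	    		} else if err != nil {
	    			log.Fatalf("bad document: %v", err)
	    		}
	    		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	    	}
	    }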
	
	I0921 22:17:44.772588  265259 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220921220439-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0921 22:17:44.772641  265259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:17:44.780191  265259 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:17:44.780285  265259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:17:44.787214  265259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (525 bytes)
	I0921 22:17:44.799829  265259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:17:44.812972  265259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0921 22:17:44.825894  265259 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:17:44.828794  265259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:17:44.837837  265259 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174 for IP: 192.168.67.2
	I0921 22:17:44.837950  265259 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:17:44.838007  265259 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:17:44.838099  265259 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/client.key
	I0921 22:17:44.838174  265259 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key.c7fa3a9e
	I0921 22:17:44.838228  265259 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.key
	I0921 22:17:44.838369  265259 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:17:44.838407  265259 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:17:44.838422  265259 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:17:44.838462  265259 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:17:44.838494  265259 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:17:44.838553  265259 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:17:44.838610  265259 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:17:44.839428  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:17:44.856493  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0921 22:17:44.873011  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:17:44.890446  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/embed-certs-20220921220439-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0921 22:17:44.907310  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:17:44.925406  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:17:44.942040  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:17:44.959497  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:17:44.976261  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:17:44.993274  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:17:45.012129  265259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:17:45.030120  265259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:17:45.045295  265259 ssh_runner.go:195] Run: openssl version
	I0921 22:17:45.050495  265259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:17:45.057768  265259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:17:45.060687  265259 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:17:45.060736  265259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:17:45.065570  265259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:17:45.072375  265259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:17:45.079434  265259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:17:45.082825  265259 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:17:45.082871  265259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:17:45.087832  265259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:17:45.094851  265259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:17:45.102146  265259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:17:45.105205  265259 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:17:45.105254  265259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:17:45.110002  265259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:17:45.116995  265259 kubeadm.go:396] StartCluster: {Name:embed-certs-20220921220439-10174 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:embed-certs-20220921220439-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:17:45.117099  265259 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:17:45.117145  265259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:17:45.142744  265259 cri.go:87] found id: "14ddc4cc7c5544ae59173e8c2d09fea3c0bddc49b7c2a7ecf8ccf45daab86f43"
	I0921 22:17:45.142772  265259 cri.go:87] found id: "35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	I0921 22:17:45.142781  265259 cri.go:87] found id: "2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334"
	I0921 22:17:45.142790  265259 cri.go:87] found id: "4c0ef4a5b32546e86e84aa28e2b53370eb4c462d47208c2f4053d8a94da4e5d0"
	I0921 22:17:45.142798  265259 cri.go:87] found id: "6dc0cbf3dcda3fe8512f7ac309f5980e6a0d33dedf1aafdf4d79890ef21016e9"
	I0921 22:17:45.142807  265259 cri.go:87] found id: "07e2b5e608591e85913066c0986a5bd5ea1bf1e68a095ae9fea95c89af2a5837"
	I0921 22:17:45.142821  265259 cri.go:87] found id: "50596ff38ce686be7705ce5777cdda9e90065d702d371e9f4371f62b19f49c34"
	I0921 22:17:45.142836  265259 cri.go:87] found id: ""
	I0921 22:17:45.142884  265259 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:17:45.155473  265259 cri.go:114] JSON = null
	W0921 22:17:45.155529  265259 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 7
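
The warning above comes from cross-checking two views of the same runtime: crictl ps found seven kube-system containers, while runc list returned a null JSON array, so there was nothing to unpause. A rough sketch of that cross-check, assuming crictl and runc are runnable on the node (the real logic lives in minikube's cri.go):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Container IDs in kube-system according to the CRI.
    	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	ids := strings.Fields(string(psOut))

    	// Containers according to runc's own state tracking.
    	runcOut, err := exec.Command("sudo", "runc",
    		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var listed []map[string]interface{}
    	// A literal "null" payload unmarshals into a nil slice without error.
    	if err := json.Unmarshal(runcOut, &listed); err != nil {
    		log.Fatal(err)
    	}
    	if len(listed) == 0 && len(ids) > 0 {
    		fmt.Printf("list returned 0 containers, but ps returned %d\n", len(ids))
    	}
    }
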
	I0921 22:17:45.155604  265259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:17:45.162582  265259 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:17:45.162607  265259 kubeadm.go:627] restartCluster start
	I0921 22:17:45.162645  265259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:17:45.168940  265259 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:45.169601  265259 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220921220439-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:17:45.169926  265259 kubeconfig.go:127] "embed-certs-20220921220439-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:17:45.170545  265259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:17:45.171850  265259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:17:45.178290  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:45.178333  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:45.186215  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:45.386491  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:45.386599  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:45.395200  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:45.586419  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:45.586498  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:45.595465  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:45.786750  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:45.786830  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:45.795288  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:45.986611  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:45.986670  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:45.995951  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:46.187246  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:46.187330  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:46.196226  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:46.386390  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:46.386547  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:46.395268  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:46.586512  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:46.586601  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:46.595504  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:46.786819  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:46.786913  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:46.795481  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:46.986759  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:46.986825  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:46.994799  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:47.187025  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:47.187122  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:47.195517  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:47.386694  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:47.386789  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:47.396080  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:47.586359  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:47.586453  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:47.595150  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:47.786376  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:47.786467  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:47.795267  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:47.986511  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:47.986591  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:47.995293  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:48.186937  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:48.187017  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:48.195497  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:48.195521  265259 api_server.go:165] Checking apiserver status ...
	I0921 22:17:48.195561  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:17:48.203343  265259 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:48.203370  265259 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
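
The burst of pgrep probes above is a poll-until-deadline loop: ask for a kube-apiserver PID roughly every 200ms and give up once the window closes. A generic Go sketch of that shape, with an illustrative 3-second timeout (minikube's actual wait logic lives in api_server.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollUntil runs probe every interval until it succeeds or the timeout passes,
    // returning the last probe error on timeout.
    func pollUntil(interval, timeout time.Duration, probe func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := probe()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for the condition: %w", err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := pollUntil(200*time.Millisecond, 3*time.Second, func() error {
    		// Succeeds only once a matching kube-apiserver process exists.
    		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	})
    	if err != nil {
    		fmt.Println(err)
    	}
    }
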
	I0921 22:17:48.203377  265259 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:17:48.203387  265259 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:17:48.203432  265259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:17:48.227626  265259 cri.go:87] found id: "14ddc4cc7c5544ae59173e8c2d09fea3c0bddc49b7c2a7ecf8ccf45daab86f43"
	I0921 22:17:48.227658  265259 cri.go:87] found id: "35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c"
	I0921 22:17:48.227665  265259 cri.go:87] found id: "2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334"
	I0921 22:17:48.227671  265259 cri.go:87] found id: "4c0ef4a5b32546e86e84aa28e2b53370eb4c462d47208c2f4053d8a94da4e5d0"
	I0921 22:17:48.227677  265259 cri.go:87] found id: "6dc0cbf3dcda3fe8512f7ac309f5980e6a0d33dedf1aafdf4d79890ef21016e9"
	I0921 22:17:48.227683  265259 cri.go:87] found id: "07e2b5e608591e85913066c0986a5bd5ea1bf1e68a095ae9fea95c89af2a5837"
	I0921 22:17:48.227689  265259 cri.go:87] found id: "50596ff38ce686be7705ce5777cdda9e90065d702d371e9f4371f62b19f49c34"
	I0921 22:17:48.227694  265259 cri.go:87] found id: ""
	I0921 22:17:48.227702  265259 cri.go:232] Stopping containers: [14ddc4cc7c5544ae59173e8c2d09fea3c0bddc49b7c2a7ecf8ccf45daab86f43 35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c 2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334 4c0ef4a5b32546e86e84aa28e2b53370eb4c462d47208c2f4053d8a94da4e5d0 6dc0cbf3dcda3fe8512f7ac309f5980e6a0d33dedf1aafdf4d79890ef21016e9 07e2b5e608591e85913066c0986a5bd5ea1bf1e68a095ae9fea95c89af2a5837 50596ff38ce686be7705ce5777cdda9e90065d702d371e9f4371f62b19f49c34]
	I0921 22:17:48.227783  265259 ssh_runner.go:195] Run: which crictl
	I0921 22:17:48.230680  265259 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 14ddc4cc7c5544ae59173e8c2d09fea3c0bddc49b7c2a7ecf8ccf45daab86f43 35601481c1b925f8785d0730679803d265cbc261ddf4ca9086c08ec08ce8d10c 2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334 4c0ef4a5b32546e86e84aa28e2b53370eb4c462d47208c2f4053d8a94da4e5d0 6dc0cbf3dcda3fe8512f7ac309f5980e6a0d33dedf1aafdf4d79890ef21016e9 07e2b5e608591e85913066c0986a5bd5ea1bf1e68a095ae9fea95c89af2a5837 50596ff38ce686be7705ce5777cdda9e90065d702d371e9f4371f62b19f49c34
	I0921 22:17:48.256773  265259 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:17:48.266827  265259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:17:48.273887  265259 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 21 22:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Sep 21 22:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Sep 21 22:04 /etc/kubernetes/scheduler.conf
	
	I0921 22:17:48.273943  265259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0921 22:17:48.280586  265259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0921 22:17:48.286987  265259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0921 22:17:48.293523  265259 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:48.293571  265259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:17:48.299964  265259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0921 22:17:48.306477  265259 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:17:48.306527  265259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:17:48.312739  265259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:17:48.319492  265259 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:17:48.319514  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:17:48.363948  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:17:49.408818  265259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.044833146s)
	I0921 22:17:49.408852  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:17:49.548151  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:17:49.603091  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
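Rather than a full `kubeadm init`, the upgrade replays individual init phases in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, regenerating the static pod manifests while leaving existing cluster state in place. A sketch of the same sequence, assuming the version-matched binaries live under /var/lib/minikube/binaries/v1.25.2 as in the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Prefix PATH so kubeadm resolves the v1.25.2 binaries, as the log does.
		sh := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("/bin/bash", "-c", sh).CombinedOutput(); err != nil {
			log.Fatalf("phase %q: %v\n%s", phase, err, out)
		}
	}
}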
	I0921 22:17:49.702755  265259 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:17:49.702823  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:17:50.211916  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:17:50.711526  265259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:17:50.783714  265259 api_server.go:71] duration metric: took 1.080964393s to wait for apiserver process to appear ...
	I0921 22:17:50.783788  265259 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:17:50.783802  265259 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0921 22:17:50.784206  265259 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0921 22:17:51.284903  265259 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0921 22:17:54.610969  265259 api_server.go:266] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:17:54.611048  265259 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:17:54.784333  265259 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0921 22:17:54.800998  265259 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:17:54.801129  265259 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:17:55.284900  265259 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0921 22:17:55.289701  265259 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:17:55.289739  265259 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:17:55.784932  265259 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0921 22:17:55.792241  265259 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:17:55.792279  265259 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:17:56.284759  265259 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0921 22:17:56.290082  265259 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0921 22:17:56.296022  265259 api_server.go:140] control plane version: v1.25.2
	I0921 22:17:56.296045  265259 api_server.go:130] duration metric: took 5.512251145s to wait for apiserver health ...
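The progression above (connection refused, then 403 for system:anonymous, then 500 while poststarthooks such as rbac/bootstrap-roles finish, then 200 "ok") is the normal shape of an apiserver restart; the probe is unauthenticated, so everything short of a 200 is treated as retryable. A minimal poller with the same tolerance, assuming the endpoint from the log and skipping certificate verification since the serving cert is not in the host trust store:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			// e.g. connection refused while the apiserver container restarts
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy: 200 with body "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}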
	I0921 22:17:56.296053  265259 cni.go:95] Creating CNI manager for ""
	I0921 22:17:56.296060  265259 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:17:56.298633  265259 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:17:56.300086  265259 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:17:56.304617  265259 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:17:56.304640  265259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:17:56.318502  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
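kindnet is recommended here because the docker driver with the containerd runtime ships no default CNI; the manifest is staged on the node and applied with the version-matched kubectl. A sketch of the same apply, assuming the on-node paths from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.25.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	if err != nil {
		log.Fatalf("apply cni: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}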
	I0921 22:17:57.199402  265259 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:17:57.305627  265259 system_pods.go:59] 9 kube-system pods found
	I0921 22:17:57.305665  265259 system_pods.go:61] "coredns-565d847f94-qn9gp" [20daa866-cc16-40c4-b313-f3e428a9a8a5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:17:57.305675  265259 system_pods.go:61] "etcd-embed-certs-20220921220439-10174" [3c71191d-38cb-4a6e-bd7e-63e379d3b43b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:17:57.305684  265259 system_pods.go:61] "kindnet-mqr9d" [1dcc030c-e4fc-498d-a309-94f66d79cd24] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0921 22:17:57.305691  265259 system_pods.go:61] "kube-apiserver-embed-certs-20220921220439-10174" [ab1de1f3-dfd5-4913-8ae4-a74a89c93e1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0921 22:17:57.305699  265259 system_pods.go:61] "kube-controller-manager-embed-certs-20220921220439-10174" [193b25e0-66d6-4b09-ad3a-c15f10a8a5a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:17:57.305708  265259 system_pods.go:61] "kube-proxy-s7c85" [8fbb5ba1-1742-4f87-9204-633c80ba11ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0921 22:17:57.305718  265259 system_pods.go:61] "kube-scheduler-embed-certs-20220921220439-10174" [cd6af3eb-d9d4-4566-bdc1-e1f02abf8bba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0921 22:17:57.305727  265259 system_pods.go:61] "metrics-server-5c8fd5cf8-6pjvf" [b38e30a3-1492-4071-b459-fb3165d178fb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:17:57.305736  265259 system_pods.go:61] "storage-provisioner" [7d390d04-79f5-4f72-ba86-173266540d5c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:17:57.305745  265259 system_pods.go:74] duration metric: took 106.317139ms to wait for pod list to return data ...
	I0921 22:17:57.305759  265259 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:17:57.309232  265259 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:17:57.309258  265259 node_conditions.go:123] node cpu capacity is 8
	I0921 22:17:57.309269  265259 node_conditions.go:105] duration metric: took 3.502309ms to run NodePressure ...
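The NodePressure check reads node capacity from the API rather than from the host. A client-go sketch of the same lookup, assuming the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Mirrors the "node storage ephemeral capacity" / "node cpu capacity" lines.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%d\n",
			n.Name, n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().Value())
	}
}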
	I0921 22:17:57.309285  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:17:57.462402  265259 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:17:57.466460  265259 kubeadm.go:778] kubelet initialised
	I0921 22:17:57.466482  265259 kubeadm.go:779] duration metric: took 4.057626ms waiting for restarted kubelet to initialise ...
	I0921 22:17:57.466489  265259 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:17:57.471599  265259 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-qn9gp" in "kube-system" namespace to be "Ready" ...
	I0921 22:17:59.476817  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:01.477612  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:03.977007  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:06.477048  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:08.977213  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:10.977540  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:12.977891  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:14.979586  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:17.477557  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:19.977758  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:21.977804  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:24.478339  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:26.978336  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:29.476882  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:31.478356  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:33.976605  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:35.976878  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:38.477434  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:40.478033  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:42.977925  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:45.477571  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:47.977615  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:49.977882  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:52.477206  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:54.477844  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:56.976782  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:18:58.977180  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:01.477897  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:03.977685  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:06.477200  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:08.976722  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:10.977610  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:13.476627  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:15.477519  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:17.977105  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:19.977626  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:22.477263  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:24.477637  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:26.977251  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:29.476688  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:31.477764  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:33.976991  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:35.977033  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:37.977441  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:40.477326  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:42.977381  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:44.977553  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:47.476449  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:49.477452  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:51.977396  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:54.477645  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:56.977376  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:19:59.477995  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:20:01.977425  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:20:03.977726  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:20:06.477894  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 46 further identical pod_ready.go:102 lines for pod "coredns-565d847f94-qn9gp" (Pending, Unschedulable: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }), logged every ~2.5s from 22:20:08 through 22:21:52, elided ...]
	I0921 22:21:55.477148  265259 pod_ready.go:102] pod "coredns-565d847f94-qn9gp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:05:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:57.473940  265259 pod_ready.go:81] duration metric: took 4m0.002309063s waiting for pod "coredns-565d847f94-qn9gp" in "kube-system" namespace to be "Ready" ...
	E0921 22:21:57.473968  265259 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-qn9gp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:21:57.473989  265259 pod_ready.go:38] duration metric: took 4m0.007491689s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:21:57.474010  265259 kubeadm.go:631] restartCluster took 4m12.311396089s
	W0921 22:21:57.474123  265259 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
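The wall of pod_ready.go:102 entries above is a poll loop: minikube re-fetches the pod from the apiserver on a ~2.5s interval and tests its Ready condition until the 4m0s deadline it just reported hitting. A minimal client-go sketch of that kind of wait (a hypothetical stand-alone program, not minikube's actual implementation; the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll on a short interval, give up after 4m -- the deadline the log reports hitting.
        err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-565d847f94-qn9gp", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient apiserver error: keep polling
            }
            return podReady(pod), nil
        })
        fmt.Println("pod became ready:", err == nil)
    }

Returning false with a nil error on a failed Get keeps the loop alive across transient apiserver blips, which is why the log shows an unbroken run of status lines right up to the deadline.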
	I0921 22:21:57.474151  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:22:00.342329  265259 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.868152928s)
	I0921 22:22:00.342387  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:22:00.351706  265259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:22:00.358843  265259 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:22:00.358897  265259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:22:00.365576  265259 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
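The "config check" that just failed is only a file-existence probe: kubeadm's four kubeconfig files under /etc/kubernetes are missing (the reset above removed them), so there is no stale config to clean up and the flow falls through to kubeadm init below. An equivalent probe, sketched (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os"
    )

    // staleConfigPresent reports whether any kubeconfig file kubeadm writes
    // under /etc/kubernetes survived from a previous cluster.
    func staleConfigPresent() bool {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if _, err := os.Stat(f); err == nil {
                return true // something survived; stale-config cleanup is warranted
            }
        }
        return false
    }

    func main() {
        fmt.Println("stale config present:", staleConfigPresent())
    }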
	I0921 22:22:00.365616  265259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:22:00.405287  265259 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:22:00.405348  265259 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:22:00.433369  265259 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:22:00.433451  265259 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:22:00.433486  265259 kubeadm.go:317] OS: Linux
	I0921 22:22:00.433611  265259 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:22:00.433682  265259 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:22:00.433726  265259 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:22:00.433768  265259 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:22:00.433805  265259 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:22:00.433852  265259 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:22:00.433893  265259 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:22:00.434000  265259 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:22:00.434102  265259 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:22:00.502463  265259 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:22:00.502591  265259 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:22:00.502721  265259 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:22:00.621941  265259 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:22:00.626833  265259 out.go:204]   - Generating certificates and keys ...
	I0921 22:22:00.626978  265259 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:22:00.627053  265259 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:22:00.627158  265259 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:22:00.627246  265259 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:22:00.627351  265259 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:22:00.627410  265259 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:22:00.627483  265259 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:22:00.627551  265259 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:22:00.627613  265259 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:22:00.627685  265259 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:22:00.627760  265259 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:22:00.627816  265259 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:22:00.721598  265259 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:22:00.898538  265259 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:22:00.999773  265259 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:22:01.056843  265259 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:22:01.068556  265259 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:22:01.069535  265259 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:22:01.069603  265259 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:22:01.152435  265259 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:22:01.154531  265259 out.go:204]   - Booting up control plane ...
	I0921 22:22:01.154652  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:22:01.154956  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:22:01.156705  265259 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:22:01.157879  265259 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:22:01.159675  265259 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:22:07.161939  265259 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.002148 seconds
	I0921 22:22:07.162112  265259 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:22:07.170819  265259 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:22:07.689049  265259 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:22:07.689253  265259 kubeadm.go:317] [mark-control-plane] Marking the node embed-certs-20220921220439-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:22:08.196658  265259 kubeadm.go:317] [bootstrap-token] Using token: 6acdlb.hwh133k5t8mfdxv9
	I0921 22:22:08.198133  265259 out.go:204]   - Configuring RBAC rules ...
	I0921 22:22:08.198241  265259 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:22:08.202013  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:22:08.206522  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:22:08.208686  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:22:08.210651  265259 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:22:08.212506  265259 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:22:08.219466  265259 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:22:08.394068  265259 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:22:08.606483  265259 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:22:08.607838  265259 kubeadm.go:317] 
	I0921 22:22:08.607927  265259 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:22:08.607941  265259 kubeadm.go:317] 
	I0921 22:22:08.608028  265259 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:22:08.608042  265259 kubeadm.go:317] 
	I0921 22:22:08.608070  265259 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:22:08.608136  265259 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:22:08.608199  265259 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:22:08.608205  265259 kubeadm.go:317] 
	I0921 22:22:08.608270  265259 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:22:08.608279  265259 kubeadm.go:317] 
	I0921 22:22:08.608333  265259 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:22:08.608341  265259 kubeadm.go:317] 
	I0921 22:22:08.608405  265259 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:22:08.608491  265259 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:22:08.608575  265259 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:22:08.608582  265259 kubeadm.go:317] 
	I0921 22:22:08.608682  265259 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:22:08.608771  265259 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:22:08.608778  265259 kubeadm.go:317] 
	I0921 22:22:08.608870  265259 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 6acdlb.hwh133k5t8mfdxv9 \
	I0921 22:22:08.608983  265259 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:22:08.609009  265259 kubeadm.go:317] 	--control-plane 
	I0921 22:22:08.609016  265259 kubeadm.go:317] 
	I0921 22:22:08.609128  265259 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:22:08.609134  265259 kubeadm.go:317] 
	I0921 22:22:08.609197  265259 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 6acdlb.hwh133k5t8mfdxv9 \
	I0921 22:22:08.609284  265259 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:22:08.611756  265259 kubeadm.go:317] W0921 22:22:00.400408    3296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:22:08.612043  265259 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:22:08.612188  265259 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:22:08.612219  265259 cni.go:95] Creating CNI manager for ""
	I0921 22:22:08.612229  265259 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:22:08.614511  265259 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:22:08.615918  265259 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:22:08.676246  265259 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:22:08.676274  265259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:22:08.693653  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
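The kindnet CNI manifest is applied by shelling out to the version-pinned kubectl inside the node, exactly as the Run: line above records. Stripped of the SSH hop, that step reduces to the following (illustrative exec wrapper, not minikube's ssh_runner; paths copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same invocation the Run: line records, minus the SSH hop into the node.
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.25.2/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }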
	I0921 22:22:09.436658  265259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:22:09.436794  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:09.436795  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=embed-certs-20220921220439-10174 minikube.k8s.io/updated_at=2022_09_21T22_22_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:09.443334  265259 ops.go:34] apiserver oom_adj: -16
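The cat /proc/$(pgrep kube-apiserver)/oom_adj probe behind this line confirms the apiserver's OOM score: -16 tells the kernel's OOM killer to prefer other victims, so a memory-pressured node keeps its control plane alive. An equivalent check in Go (illustrative; assumes pgrep prints a single PID):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep the apiserver (assumed to match exactly one process), then read
        // its legacy OOM-adjust score from procfs.
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // -16 in the log above
    }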
	I0921 22:22:09.528958  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:10.110988  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 22 further identical "kubectl get sa default" retries, issued every ~0.5s from 22:22:10 through 22:22:21, elided ...]
	I0921 22:22:21.611123  265259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:22:21.738997  265259 kubeadm.go:1067] duration metric: took 12.302271715s to wait for elevateKubeSystemPrivileges.
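The burst of get sa default retries above is the second half of elevateKubeSystemPrivileges: after creating the minikube-rbac clusterrolebinding, minikube polls until the controller-manager has actually created the default ServiceAccount. A compact sketch of that retry (the one-minute deadline is an assumption, not taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.25.2/kubectl"
        // Retry every 500ms until `get sa default` succeeds, i.e. until the
        // controller-manager has created the default ServiceAccount.
        err := wait.PollImmediate(500*time.Millisecond, time.Minute, func() (bool, error) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            return err == nil, nil
        })
        fmt.Println("default ServiceAccount available:", err == nil)
    }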
	I0921 22:22:21.739026  265259 kubeadm.go:398] StartCluster complete in 4m36.622037809s
	I0921 22:22:21.739041  265259 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:22:21.739131  265259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:22:21.740483  265259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:22:22.256205  265259 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220921220439-10174" rescaled to 1
	I0921 22:22:22.256273  265259 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:22:22.260022  265259 out.go:177] * Verifying Kubernetes components...
	I0921 22:22:22.256319  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:22:22.256360  265259 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:22:22.256527  265259 config.go:180] Loaded profile config "embed-certs-20220921220439-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:22:22.261883  265259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:22:22.261936  265259 addons.go:65] Setting dashboard=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261954  265259 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261957  265259 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261939  265259 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220921220439-10174"
	I0921 22:22:22.261968  265259 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220921220439-10174"
	W0921 22:22:22.261978  265259 addons.go:162] addon metrics-server should already be in state true
	I0921 22:22:22.261978  265259 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220921220439-10174"
	I0921 22:22:22.261965  265259 addons.go:153] Setting addon dashboard=true in "embed-certs-20220921220439-10174"
	W0921 22:22:22.261991  265259 addons.go:162] addon dashboard should already be in state true
	I0921 22:22:22.261979  265259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220921220439-10174"
	I0921 22:22:22.262027  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.262047  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	W0921 22:22:22.261992  265259 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:22:22.262152  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.262334  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262530  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262581  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.262607  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.275321  265259 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:22:22.305122  265259 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:22:22.302980  265259 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220921220439-10174"
	I0921 22:22:22.308412  265259 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:22:22.306819  265259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0921 22:22:22.306823  265259 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:22:22.310056  265259 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:22:22.310063  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:22:22.311661  265259 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:22:22.311677  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:22:22.310084  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:22:22.313452  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:22:22.313470  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:22:22.313520  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.310090  265259 host.go:66] Checking if "embed-certs-20220921220439-10174" exists ...
	I0921 22:22:22.311776  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.311792  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.314077  265259 cli_runner.go:164] Run: docker container inspect embed-certs-20220921220439-10174 --format={{.State.Status}}
	I0921 22:22:22.348685  265259 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:22:22.348718  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:22:22.348780  265259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220921220439-10174
	I0921 22:22:22.350631  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.350663  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.355060  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.379378  265259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
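That long pipeline is how the host.minikube.internal record lands in CoreDNS: fetch the coredns ConfigMap, splice a hosts{} stanza in front of the forward plugin with sed, and kubectl replace the result. The string surgery reduces to something like this (hypothetical helper mirroring the sed expression; the sample Corefile is illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // spliceHostRecord inserts a hosts{} stanza before the "forward ." line of a
    // Corefile, mirroring the sed expression in the Run: line above.
    func spliceHostRecord(corefile, hostIP string) string {
        stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out = append(out, stanza)
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        corefile := ".:53 {\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
        fmt.Println(spliceHostRecord(corefile, "192.168.67.1"))
    }

The injected stanza resolves host.minikube.internal to the host gateway IP (192.168.67.1 here, per the host-record line below) and falls through to the forward plugin for everything else.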
	I0921 22:22:22.385678  265259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49428 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/embed-certs-20220921220439-10174/id_rsa Username:docker}
	I0921 22:22:22.494559  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:22:22.494597  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:22:22.494751  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:22:22.494781  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:22:22.500109  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:22:22.589944  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:22:22.589985  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:22:22.592642  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:22:22.592670  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:22:22.598738  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:22:22.686188  265259 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:22:22.686217  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:22:22.692562  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:22:22.692589  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:22:22.777831  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:22:22.795247  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:22:22.795282  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:22:22.886056  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:22:22.886085  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:22:22.987013  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:22:22.987091  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:22:23.081859  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:22:23.081893  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:22:23.177114  265259 start.go:810] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0921 22:22:23.181203  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:22:23.181235  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:22:23.203322  265259 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:22:23.203354  265259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:22:23.284061  265259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:22:23.584574  265259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084417298s)
	I0921 22:22:23.777502  265259 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220921220439-10174"
	I0921 22:22:24.109571  265259 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:22:24.111462  265259 addons.go:414] enableAddons completed in 1.8551066s
	I0921 22:22:24.289051  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:26.789101  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:29.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:31.289419  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:33.789632  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:36.289225  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:38.289442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:40.789366  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:43.289515  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:45.789266  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:47.789660  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:50.288988  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:52.289299  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:54.789136  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:56.789849  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:22:59.289439  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:01.789349  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:03.789567  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:06.289658  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:08.289818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:10.290273  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:12.789081  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:14.789291  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:16.789892  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:18.790028  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:21.289295  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:23.289408  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:25.789350  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:27.789995  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:30.288831  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:32.289573  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:34.789272  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:36.789372  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:38.789452  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:41.288941  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:43.290284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:45.789282  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:47.789698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:50.289450  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:52.789754  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:55.289930  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:23:57.789456  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:00.289698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.790230  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:05.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:07.290168  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:09.789185  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:12.289080  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:14.289667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:16.789199  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:18.789856  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.289803  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:23.789425  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:25.789684  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.790412  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:30.289213  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:34.789530  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:37.289284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:39.789967  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:42.289726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:44.789355  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:47.288847  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:49.289182  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:51.289938  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:53.789719  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:56.289013  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:58.289251  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:00.789124  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.789910  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:05.290121  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.789553  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:10.289528  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:12.789110  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:14.789413  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:16.789947  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:19.289330  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:21.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:23.789488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:25.789726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:28.289323  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:30.789442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:32.789667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:34.790488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.288758  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:39.289052  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:41.789424  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:44.289331  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:46.789866  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:49.289362  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:51.789574  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:54.288796  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:56.289801  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:58.789397  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:00.789911  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:03.289345  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:05.789183  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:08.289258  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:10.789536  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.790035  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:15.290491  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.789818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:20.289226  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292776  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292799  265259 node_ready.go:38] duration metric: took 4m0.017444735s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:26:22.294631  265259 out.go:177] 
	W0921 22:26:22.296115  265259 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:26:22.296143  265259 out.go:239] * 
	W0921 22:26:22.296927  265259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:26:22.298511  265259 out.go:177] 

                                                
                                                
** /stderr **
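The exit above (GUEST_START, surfaced to the test as exit status 80) means the apiserver and addons came up but the node never reported Ready within the 6m0s wait — the polling loop in the stderr log shows "Ready":"False" for the full 4m node_ready window. A minimal triage sketch against the still-running profile, assuming the container from the log above is still up (profile name taken verbatim from the log; this is an illustration, not part of the test run):

	out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs --file=logs.txt
	out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 ssh -- sudo crictl ps -a
	out/minikube-linux-amd64 kubectl -p embed-certs-20220921220439-10174 -- get nodes -o wide
	out/minikube-linux-amd64 kubectl -p embed-certs-20220921220439-10174 -- -n kube-system get pods

A node that stays NotReady with the node.kubernetes.io/not-ready taint (visible later in this report for a parallel profile's pending coredns pod) usually points at the CNI or the container runtime rather than the apiserver.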
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-20220921220439-10174 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220439-10174
helpers_test.go:235: (dbg) docker inspect embed-certs-20220921220439-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a",
	        "Created": "2022-09-21T22:04:47.451918435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265957,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:17:28.927823098Z",
	            "FinishedAt": "2022-09-21T22:17:27.423983604Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hostname",
	        "HostsPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hosts",
	        "LogPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a-json.log",
	        "Name": "/embed-certs-20220921220439-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20220921220439-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220921220439-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220921220439-10174",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220921220439-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220921220439-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfb659b902e30decb66fbff7256dc4eff717f7e3540c5368b0dbaf96e0b6ac1c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bfb659b902e3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220921220439-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0efc3a031048",
	                        "embed-certs-20220921220439-10174"
	                    ],
	                    "NetworkID": "e71aa30fd3ace87130e43e4abce1f2566d43d95c3b2e37ab1594e3c5a105c1bc",
	                    "EndpointID": "aaa77ea547f85d026152cafd14deb1d062a93066c3408701210f6a40b1b21fac",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
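The NetworkSettings.Ports map in the inspect output above is what the harness's cli_runner queries resolve against. As a sketch of how the SSH host port used by sshutil.go is derived (Go template adapted from the cli_runner invocation logged earlier in this test; the expected output is read off the JSON above):

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-20220921220439-10174
	49428

The same template shape works for the other published ports (2376/tcp, 8443/tcp, ...), which is presumably how minikube re-discovers host ports after a container restart reallocates them — note the PortBindings in HostConfig leave HostPort empty and the live values only appear here.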
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:23 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:24:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:24:01.692796  283599 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:24:01.693211  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693232  283599 out.go:309] Setting ErrFile to fd 2...
	I0921 22:24:01.693240  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693504  283599 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:24:01.694665  283599 out.go:303] Setting JSON to false
	I0921 22:24:01.696140  283599 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3993,"bootTime":1663795049,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:24:01.696247  283599 start.go:125] virtualization: kvm guest
	I0921 22:24:01.698874  283599 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:24:01.701214  283599 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:24:01.701128  283599 notify.go:214] Checking for updates...
	I0921 22:24:01.703092  283599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:24:01.704791  283599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:01.706544  283599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:24:01.708172  283599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:23:57.318050  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:59.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:01.710349  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:01.710930  283599 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:24:01.744026  283599 docker.go:137] docker version: linux-20.10.18
	I0921 22:24:01.744136  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.840732  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.764457724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
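
The docker system info --format "{{json .}}" probe above returns the daemon's state as a single JSON document, which minikube decodes to validate resources and runtime settings before reusing the driver. A minimal Go sketch of that decode step (the struct below is an illustrative subset of the payload, not minikube's actual type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo captures a few of the fields visible in the log line above;
    // the real JSON document is much larger.
    type dockerInfo struct {
    	NCPU          int    `json:"NCPU"`
    	MemTotal      int64  `json:"MemTotal"`
    	ServerVersion string `json:"ServerVersion"`
    	CgroupDriver  string `json:"CgroupDriver"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
    		info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
    }
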
	I0921 22:24:01.840851  283599 docker.go:254] overlay module found
	I0921 22:24:01.843051  283599 out.go:177] * Using the docker driver based on existing profile
	I0921 22:24:01.844347  283599 start.go:284] selected driver: docker
	I0921 22:24:01.844371  283599 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.844475  283599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:24:01.845300  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.940944  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.86716064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.941199  283599 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:24:01.941223  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:01.941231  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:01.941249  283599 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.944240  283599 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:24:01.945596  283599 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:24:01.946905  283599 out.go:177] * Pulling base image ...
	I0921 22:24:01.948255  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:01.948306  283599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:24:01.948321  283599 cache.go:57] Caching tarball of preloaded images
	I0921 22:24:01.948361  283599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:24:01.948572  283599 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:24:01.948588  283599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
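
The preload step above is a plain cache check: if the versioned image tarball is already on disk, the download is skipped and the cached tarball is used as-is. The decision reduces to a stat, roughly (path shortened for illustration):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Illustrative relative path; the log shows the full location under
    	// .minikube/cache/preloaded-tarball/.
    	p := "preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4"
    	if fi, err := os.Stat(p); err == nil && fi.Size() > 0 {
    		fmt.Println("found local preload, skipping download")
    	} else {
    		fmt.Println("no usable preload, would download")
    	}
    }
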
	I0921 22:24:01.948702  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:01.976413  283599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:24:01.976445  283599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:24:01.976457  283599 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:24:01.976502  283599 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:24:01.976622  283599 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 78.111µs
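
The machines lock acquired here (with the Delay:500ms Timeout:10m0s policy shown two lines up) serializes operations on a machine by name. A generic sketch of that acquire-with-timeout pattern, not minikube's implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    	"time"
    )

    var (
    	mu    sync.Mutex
    	locks = map[string]*sync.Mutex{} // one lock per machine name
    )

    func named(name string) *sync.Mutex {
    	mu.Lock()
    	defer mu.Unlock()
    	if locks[name] == nil {
    		locks[name] = &sync.Mutex{}
    	}
    	return locks[name]
    }

    // acquire retries TryLock every delay until timeout, mirroring the
    // Delay/Timeout fields in the log line above.
    func acquire(name string, delay, timeout time.Duration) error {
    	l := named(name)
    	deadline := time.Now().Add(timeout)
    	for !l.TryLock() {
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring machines lock")
    		}
    		time.Sleep(delay)
    	}
    	return nil
    }

    func main() {
    	start := time.Now()
    	if err := acquire("default-k8s-different-port-20220921221118-10174", 500*time.Millisecond, 10*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("acquired machines lock in", time.Since(start))
    }
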
	I0921 22:24:01.976652  283599 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:24:01.976660  283599 fix.go:55] fixHost starting: 
	I0921 22:24:01.976899  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.002084  283599 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221118-10174: state=Stopped err=<nil>
	W0921 22:24:02.002122  283599 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:24:02.004632  283599 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	I0921 22:24:00.289698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.790230  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.006307  283599 cli_runner.go:164] Run: docker start default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.358108  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.385298  283599 kic.go:415] container "default-k8s-different-port-20220921221118-10174" state is running.
	I0921 22:24:02.385684  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.412757  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:02.412997  283599 machine.go:88] provisioning docker machine ...
	I0921 22:24:02.413031  283599 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:24:02.413108  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.438229  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:02.438400  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:02.438416  283599 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:24:02.439038  283599 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34230->127.0.0.1:49443: read: connection reset by peer
	I0921 22:24:05.584682  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:24:05.584766  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.608825  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:05.609026  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:05.609059  283599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:24:05.739656  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:24:05.739694  283599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:24:05.739749  283599 ubuntu.go:177] setting up certificates
	I0921 22:24:05.739765  283599 provision.go:83] configureAuth start
	I0921 22:24:05.739824  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.764789  283599 provision.go:138] copyHostCerts
	I0921 22:24:05.764839  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:24:05.764846  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:24:05.764904  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:24:05.764993  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:24:05.765005  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:24:05.765028  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:24:05.765086  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:24:05.765095  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:24:05.765118  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:24:05.765169  283599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
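
provision.go:112 generates a server certificate whose SAN list mixes IP and DNS entries (192.168.85.2, 127.0.0.1, localhost, minikube, the profile name) so the same cert is valid however the Docker endpoint is addressed. A compact crypto/x509 sketch of attaching such SANs (self-signed here for brevity; the real cert is signed by the CA named in the log):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220921221118-10174"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as listed in the provision.go log line:
    		DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20220921221118-10174"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }
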
	I0921 22:24:05.914466  283599 provision.go:172] copyRemoteCerts
	I0921 22:24:05.914534  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:24:05.914564  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.939805  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.031315  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:24:06.048618  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:24:06.065530  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:24:06.083800  283599 provision.go:86] duration metric: configureAuth took 344.021748ms
	I0921 22:24:06.083828  283599 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:24:06.083988  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:06.083999  283599 machine.go:91] provisioned docker machine in 3.670987023s
	I0921 22:24:06.084006  283599 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:24:06.084012  283599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:24:06.084049  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:24:06.084088  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.108286  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.203139  283599 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:24:06.205811  283599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:24:06.205839  283599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:24:06.205852  283599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:24:06.205864  283599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:24:06.205880  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:24:06.205944  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:24:06.206037  283599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:24:06.206142  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:24:06.212569  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:06.229418  283599 start.go:303] post-start completed in 145.398445ms
	I0921 22:24:06.229483  283599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:24:06.229517  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.253305  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.340119  283599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:24:06.344050  283599 fix.go:57] fixHost completed within 4.367385464s
	I0921 22:24:06.344071  283599 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 4.367430848s
	I0921 22:24:06.344157  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368445  283599 ssh_runner.go:195] Run: systemctl --version
	I0921 22:24:06.368501  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368505  283599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:24:06.368550  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.394444  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.396066  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.513229  283599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:24:06.524587  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:24:06.533746  283599 docker.go:188] disabling docker service ...
	I0921 22:24:06.533795  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:24:06.543075  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:24:06.551813  283599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:24:06.629483  283599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:24:01.818416  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:04.317912  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:05.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:07.290168  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:06.707030  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:24:06.717244  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:24:06.729638  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.737194  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.744928  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.752650  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.760419  283599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:24:06.766584  283599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:24:06.772903  283599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:24:06.844578  283599 ssh_runner.go:195] Run: sudo systemctl restart containerd
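
The four sed edits above patch single keys in /etc/containerd/config.toml (pause image, restrict_oom_score_adj, SystemdCgroup, CNI conf_dir) before containerd is reloaded and restarted. The same line-oriented rewrite in Go, with regexes mirroring the sed expressions (must run as root; a sketch, not minikube's code):

    package main

    import (
    	"os"
    	"regexp"
    )

    // rewrite replaces every line matching pattern with repl, like sed -e 's|^.*k = .*$|k = v|'.
    func rewrite(data []byte, pattern, repl string) []byte {
    	return regexp.MustCompile("(?m)" + pattern).ReplaceAll(data, []byte(repl))
    }

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	data = rewrite(data, `^.*sandbox_image = .*$`, `sandbox_image = "registry.k8s.io/pause:3.8"`)
    	data = rewrite(data, `^.*restrict_oom_score_adj = .*$`, `restrict_oom_score_adj = false`)
    	data = rewrite(data, `^.*SystemdCgroup = .*$`, `SystemdCgroup = false`)
    	data = rewrite(data, `^.*conf_dir = .*$`, `conf_dir = "/etc/cni/net.mk"`)
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		panic(err)
    	}
    	// containerd must then be restarted (systemctl restart containerd)
    	// for the new sandbox image and cgroup settings to take effect.
    }
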
	I0921 22:24:06.917291  283599 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:24:06.917353  283599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:24:06.921118  283599 start.go:471] Will wait 60s for crictl version
	I0921 22:24:06.921184  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:06.948257  283599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:24:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:24:06.817672  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.317278  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:11.317829  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.789185  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:12.289080  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:13.817410  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:15.817496  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:17.995620  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:18.018705  283599 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:24:18.018768  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.047337  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.078051  283599 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:24:14.289667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:16.789199  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:18.079491  283599 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:24:18.103308  283599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:24:18.106553  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
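
The one-liner above updates /etc/hosts idempotently: filter out any existing host.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts with sudo. Equivalent stdlib Go (must run as root to write the file):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.85.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop stale mappings, like grep -v $'\thost.minikube.internal$'.
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }
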
	I0921 22:24:18.115993  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:18.116056  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.139896  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.139921  283599 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:24:18.139964  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.163323  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.163344  283599 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:24:18.163382  283599 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:24:18.186911  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:18.186935  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:18.186948  283599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:24:18.186961  283599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:24:18.187074  283599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
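
The generated kubeadm config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the profile under test surfaces as bindPort and controlPlaneEndpoint 8444 in place of the default 8443. minikube renders this from templates; a toy text/template rendering of just the port-bearing fields (template text is illustrative, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    const tpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.NodeIP}}\n" +
    	"  bindPort: {{.Port}}\n" +
    	"---\n" +
    	"apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: ClusterConfiguration\n" +
    	"controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}\n"

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tpl))
    	// Values from the kubeadm.go:156 options dump above.
    	if err := t.Execute(os.Stdout, struct {
    		NodeIP string
    		Port   int
    	}{NodeIP: "192.168.85.2", Port: 8444}); err != nil {
    		panic(err)
    	}
    }
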
	
	I0921 22:24:18.187152  283599 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
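
Two details of the kubelet unit above are easy to misread. The empty ExecStart= line is deliberate: in a systemd drop-in, assigning an empty value first clears the ExecStart inherited from the base kubelet.service, and the next line installs the override. And the unit is not copied from disk: the "scp memory -->" lines below show the text is assembled in memory and written straight to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of that write (unit text abbreviated from the log):

    package main

    import (
    	"os"
    	"path/filepath"
    )

    // The empty ExecStart= clears the base unit's command before the
    // override is added; kubelet flags abbreviated from the log line above.
    const dropIn = "[Unit]\n" +
    	"Wants=containerd.service\n\n" +
    	"[Service]\n" +
    	"ExecStart=\n" +
    	"ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet " +
    	"--container-runtime-endpoint=unix:///run/containerd/containerd.sock " +
    	"--kubeconfig=/etc/kubernetes/kubelet.conf\n"

    func main() {
    	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
    	if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(path, []byte(dropIn), 0644); err != nil {
    		panic(err)
    	}
    }
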
	I0921 22:24:18.187196  283599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:24:18.194012  283599 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:24:18.194081  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:24:18.200606  283599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:24:18.212899  283599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:24:18.224754  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:24:18.236775  283599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:24:18.239439  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.248263  283599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:24:18.248377  283599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:24:18.248421  283599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:24:18.248485  283599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:24:18.248538  283599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:24:18.248575  283599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:24:18.248658  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:24:18.248689  283599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:24:18.248705  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:24:18.248729  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:24:18.248758  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:24:18.248780  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:24:18.248846  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:18.249439  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:24:18.265894  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:24:18.282128  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:24:18.298690  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:24:18.315323  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:24:18.331842  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:24:18.348196  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:24:18.364368  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:24:18.380401  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:24:18.396696  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:24:18.413238  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:24:18.429482  283599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:24:18.441654  283599 ssh_runner.go:195] Run: openssl version
	I0921 22:24:18.446184  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:24:18.453215  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456119  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456166  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.460690  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:24:18.467196  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:24:18.474449  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477401  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477445  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.481956  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:24:18.488418  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:24:18.495604  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498556  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498600  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.503245  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
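
Each cert is installed into the system trust store by OpenSSL's lookup convention: hash the subject (openssl x509 -hash) and symlink the cert as /etc/ssl/certs/<hash>.0, which is exactly the command pair repeated above for minikubeCA.pem, 10174.pem, and 101742.pem. The same two steps driven from Go (shelling out to openssl, as the log does; requires root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // install links certPath into /etc/ssl/certs under its subject hash,
    // mirroring the openssl x509 -hash + ln -fs pair in the log.
    func install(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // -f semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	for _, c := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/10174.pem",
    		"/usr/share/ca-certificates/101742.pem",
    	} {
    		if err := install(c); err != nil {
    			panic(err)
    		}
    	}
    }
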
	I0921 22:24:18.509856  283599 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:18.509953  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:24:18.509985  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:18.533346  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:18.533375  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:18.533382  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:18.533388  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:18.533393  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:18.533402  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:18.533407  283599 cri.go:87] found id: ""
	I0921 22:24:18.533444  283599 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:24:18.545553  283599 cri.go:114] JSON = null
	W0921 22:24:18.545605  283599 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
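The "unpause failed" warning above comes from cross-checking two views of the runtime: crictl sees 6 kube-system containers, while `runc list` in containerd's runc root returns JSON null (no paused containers), so unpausing is skipped. A minimal sketch reproducing that check; both commands are taken verbatim from the log, the comparison logic is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system containers known to the CRI (verbatim from the log).
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(psOut))

	// Ask runc directly, in containerd's runc root (verbatim from the log).
	listOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}

	// JSON == "null" means runc sees no containers at all.
	if strings.TrimSpace(string(listOut)) == "null" && len(ids) > 0 {
		fmt.Printf("list returned 0 containers, but ps returned %d\n", len(ids))
	}
}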
	I0921 22:24:18.545686  283599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:24:18.552635  283599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:24:18.552664  283599 kubeadm.go:627] restartCluster start
	I0921 22:24:18.552705  283599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:24:18.558944  283599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.559817  283599 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220921221118-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:18.560296  283599 kubeconfig.go:127] "default-k8s-different-port-20220921221118-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:24:18.561146  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
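The kubeconfig repair above first detects that the profile's context is missing from the shared kubeconfig, then rewrites the file under a write lock. A minimal sketch of the detection step, assuming client-go's clientcmd package (k8s.io/client-go/tools/clientcmd); this is not minikube's actual code:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	name := "default-k8s-different-port-20220921221118-10174"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts[name]; !ok {
		// The log reports this case and then repairs the file under a lock.
		fmt.Printf("%q context is missing from %s - will repair!\n", name, path)
	}
}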
	I0921 22:24:18.562655  283599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:24:18.568841  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.568884  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.576584  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.776932  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.777023  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.786228  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.977461  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.977542  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.986186  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.177398  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.177487  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.186159  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.377453  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.377534  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.385921  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.577206  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.577296  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.586370  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.777572  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.777676  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.786797  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.977103  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.977188  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.985822  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.177132  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.177234  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.185876  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.377187  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.377298  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.386086  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.577399  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.577488  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.586142  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.777447  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.777527  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.786547  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.976769  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.976865  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.985682  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.176870  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.176951  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.185811  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.377116  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.377184  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.385829  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.577109  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.577202  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.585911  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.585933  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.585979  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.593866  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.593893  283599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
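The block of "Checking apiserver status ..." lines above is a fixed-interval poll: the timestamps advance by roughly 200ms per attempt until the wait gives up and restartCluster decides the cluster "needs reconfigure". A minimal stdlib sketch of that wait; the pgrep command is verbatim from the log, while the 200ms interval and the short deadline are assumptions read off the timestamps, not minikube constants:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(3 * time.Second)
	for time.Now().Before(deadline) {
		// Succeeds (exit 0) once a kube-apiserver process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the condition")
}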
	I0921 22:24:21.593899  283599 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:24:21.593908  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:24:21.593964  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:21.618017  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:21.618041  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:21.618048  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:21.618058  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:21.618064  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:21.618072  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:21.618078  283599 cri.go:87] found id: ""
	I0921 22:24:21.618082  283599 cri.go:232] Stopping containers: [1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767]
	I0921 22:24:21.618118  283599 ssh_runner.go:195] Run: which crictl
	I0921 22:24:21.621347  283599 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767
	I0921 22:24:21.645531  283599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:24:21.655622  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:24:21.662408  283599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 21 22:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep 21 22:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:11 /etc/kubernetes/scheduler.conf
	
	I0921 22:24:21.662459  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0921 22:24:21.669029  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0921 22:24:21.675699  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.682316  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.682358  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.688501  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0921 22:24:17.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:19.818111  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:18.789856  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.289803  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.694928  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.696684  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
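The grep/rm sequence above audits each static kubeconfig under /etc/kubernetes: any file that does not mention the expected control-plane endpoint (port 8444 in this profile) is deleted so the kubeconfig phase below can regenerate it. A minimal sketch of that audit; the endpoint and file list are verbatim from the log, the Go wrapper is an assumption:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // needs root, like the `sudo rm -f` in the log
		}
	}
}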
	I0921 22:24:21.703329  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710109  283599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710132  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:21.757457  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.810948  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053458682s)
	I0921 22:24:22.810976  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.943243  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.995873  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
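Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) with the version-pinned binaries prepended to PATH. A sketch of that loop; each phase command is exactly the shape shown in the log, the loop itself is an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" `+
				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			os.Exit(1)
		}
	}
}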
	I0921 22:24:23.097694  283599 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:24:23.097766  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:23.608210  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.107567  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.187217  283599 api_server.go:71] duration metric: took 1.089523123s to wait for apiserver process to appear ...
	I0921 22:24:24.187296  283599 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:24:24.187323  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:24.187688  283599 api_server.go:256] stopped: https://192.168.85.2:8444/healthz: Get "https://192.168.85.2:8444/healthz": dial tcp 192.168.85.2:8444: connect: connection refused
	I0921 22:24:24.688449  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:22.317667  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:24.317872  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:23.789425  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:25.789684  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.790412  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.592182  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:24:27.592315  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:24:27.688579  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.694601  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:27.694667  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.187832  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.192979  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.193004  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.688623  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.695172  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.695285  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:29.187841  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:29.193157  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0921 22:24:29.198775  283599 api_server.go:140] control plane version: v1.25.2
	I0921 22:24:29.198796  283599 api_server.go:130] duration metric: took 5.011488882s to wait for apiserver health ...
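The healthz wait above goes through a characteristic progression: connection refused while the apiserver restarts, 403 because the unauthenticated probe is "system:anonymous" until RBAC bootstrap roles exist, 500 while poststarthooks (rbac/bootstrap-roles and friends) are still failing, and finally 200. A minimal sketch of such a probe; the URL is verbatim from the log, the 500ms retry interval is an assumption read off the timestamps:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// No client certificate and no CA verification, which is exactly why
	// the real log first sees the 403 for "system:anonymous".
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused during restart
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}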
	I0921 22:24:29.198805  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:29.198812  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:29.201314  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:24:29.202798  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:24:29.206616  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:24:29.206636  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:24:29.221913  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
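The CNI step above scp's the kindnet manifest to the node and applies it with the version-pinned kubectl. A sketch of the apply; the command and paths are verbatim from the log, only the Go wrapper is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.25.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}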
	I0921 22:24:29.826767  283599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:24:29.834488  283599 system_pods.go:59] 9 kube-system pods found
	I0921 22:24:29.834517  283599 system_pods.go:61] "coredns-565d847f94-mrkjn" [7f364c47-74ce-4271-aab1-67bba320c586] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834528  283599 system_pods.go:61] "etcd-default-k8s-different-port-20220921221118-10174" [8f0f58a7-7eae-43db-840f-bde95464e94e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:24:29.834533  283599 system_pods.go:61] "kindnet-7wbpp" [3f16ae0b-2f66-4f1e-b234-74570472a7f8] Running
	I0921 22:24:29.834539  283599 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220921221118-10174" [3a935d6b-ca77-4bcb-ae19-0a2af77c12a1] Running
	I0921 22:24:29.834544  283599 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220921221118-10174" [d01ee91a-5587-48e9-a235-68a73d5fedef] Running
	I0921 22:24:29.834549  283599 system_pods.go:61] "kube-proxy-lzphc" [611dbd37-0771-41b2-b886-93f46d79f802] Running
	I0921 22:24:29.834554  283599 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220921221118-10174" [998713da-f133-43f7-9f11-c6110ad66c8d] Running
	I0921 22:24:29.834561  283599 system_pods.go:61] "metrics-server-5c8fd5cf8-sshzh" [5972fae5-09c2-4e2e-b609-ef85f72311e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834572  283599 system_pods.go:61] "storage-provisioner" [ca16dea1-fb3d-4cc1-b449-2236aefcc627] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834577  283599 system_pods.go:74] duration metric: took 7.786123ms to wait for pod list to return data ...
	I0921 22:24:29.834588  283599 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:24:29.837059  283599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:24:29.837085  283599 node_conditions.go:123] node cpu capacity is 8
	I0921 22:24:29.837096  283599 node_conditions.go:105] duration metric: took 2.500371ms to run NodePressure ...
	I0921 22:24:29.837121  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:30.025715  283599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029542  283599 kubeadm.go:778] kubelet initialised
	I0921 22:24:30.029565  283599 kubeadm.go:779] duration metric: took 3.826857ms waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029572  283599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:24:30.034316  283599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
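The system_pods/pod_ready wait above lists kube-system pods and then watches each system-critical pod for a Ready=True condition. A sketch of that check, assuming client-go and the node-local kubeconfig path shown in the kubectl command earlier; this is not minikube's actual code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			fmt.Printf("pod %q doesn't have \"Ready\" status (phase %s)\n",
				p.Name, p.Status.Phase)
		}
	}
}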
	I0921 22:24:26.817684  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:29.317793  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:31.318001  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:30.289213  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.039865  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.040511  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:36.539322  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:33.817371  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:35.817456  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.789530  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:37.289284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:38.539700  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:41.040333  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:37.817967  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:40.318244  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:39.789967  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:42.289726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:43.539636  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:45.540134  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:42.817716  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.818139  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.789355  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:47.288847  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:48.040425  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:50.539475  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:47.317825  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.318211  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.289182  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:51.289938  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:52.539590  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:54.540310  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:51.817491  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:55.818165  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.789719  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:56.289013  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:57.040311  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:59.539775  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.318151  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:00.318254  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.289251  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:00.789124  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.789910  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.040207  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.540336  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.817283  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.817911  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:05.290121  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.789553  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.039774  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.039928  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:11.040136  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.817957  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:10.289528  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:12.789110  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:13.540022  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:16.040513  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:12.317490  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.818433  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.789413  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:16.789947  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:18.539457  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:21.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:17.317880  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.289330  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:21.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:23.539701  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.039677  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:22.317640  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:24.318075  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:23.789488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:25.789726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:28.539400  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:30.540154  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.817737  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.818270  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:31.318310  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.289323  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:30.789442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:32.789667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:33.039502  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.039801  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:33.318392  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.818247  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:34.790488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.288758  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.539221  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.539681  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:41.539999  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:38.317564  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:40.317641  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.289052  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:41.789424  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:44.040284  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:46.540320  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:42.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:45.317732  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:44.289331  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:46.789866  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:49.039837  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:51.540123  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:47.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:49.314620  276511 pod_ready.go:81] duration metric: took 4m0.002300536s waiting for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	E0921 22:25:49.314670  276511 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:25:49.314692  276511 pod_ready.go:38] duration metric: took 4m0.007078344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:25:49.314717  276511 kubeadm.go:631] restartCluster took 4m10.710033944s
	W0921 22:25:49.314858  276511 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:25:49.314887  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:25:49.289362  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:51.789574  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:54.040292  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:56.540637  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:52.154431  276511 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.839517184s)
	I0921 22:25:52.154487  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:25:52.163969  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:25:52.170969  276511 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:25:52.171027  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:25:52.177996  276511 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:25:52.178063  276511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
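Every name in --ignore-preflight-errors downgrades one kubeadm preflight check from fatal to warning (the DirAvailable-* and FileAvailable-* entries are expected to trip on a node that already hosted a cluster). The preflight phase can also be exercised on its own; a sketch using the minikube-provisioned binary and config paths from the log:

  sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" \
    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification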
	I0921 22:25:52.213969  276511 kubeadm.go:317] W0921 22:25:52.213140    3321 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:25:52.246713  276511 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:25:52.310910  276511 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
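Both warnings are expected in this environment: the SystemVerification check cannot parse the kernel config because the GCP kernel image ships no `configs` module, and minikube supervises the kubelet itself rather than relying on systemd enablement. Outside minikube, the second warning is cleared exactly as kubeadm suggests:

  sudo systemctl enable kubelet.service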
	I0921 22:25:54.288796  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:56.289801  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:01.184243  276511 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:26:01.184314  276511 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:26:01.184416  276511 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:26:01.184507  276511 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:26:01.184592  276511 kubeadm.go:317] OS: Linux
	I0921 22:26:01.184673  276511 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:26:01.184737  276511 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:26:01.184793  276511 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:26:01.184856  276511 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:26:01.184921  276511 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:26:01.184985  276511 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:26:01.185046  276511 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:26:01.185099  276511 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:26:01.185157  276511 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:26:01.185254  276511 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:26:01.185380  276511 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:26:01.185526  276511 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
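The pre-pull that kubeadm hints at can be done ahead of init to take image downloads off the critical path; a sketch against this cluster's version and CRI socket (both taken from the log):

  sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" \
    kubeadm config images pull --kubernetes-version v1.25.2 \
      --cri-socket unix:///run/containerd/containerd.sock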
	I0921 22:26:01.185623  276511 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:26:01.187463  276511 out.go:204]   - Generating certificates and keys ...
	I0921 22:26:01.187540  276511 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:26:01.187594  276511 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:26:01.187659  276511 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:26:01.187785  276511 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:26:01.187900  276511 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:26:01.187958  276511 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:26:01.188014  276511 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:26:01.188086  276511 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:26:01.188221  276511 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:26:01.188336  276511 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:26:01.188409  276511 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:26:01.188488  276511 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:26:01.188556  276511 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:26:01.188636  276511 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:26:01.188731  276511 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:26:01.188817  276511 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:26:01.188953  276511 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:26:01.189087  276511 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:26:01.189191  276511 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:26:01.189310  276511 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:26:01.191284  276511 out.go:204]   - Booting up control plane ...
	I0921 22:26:01.191385  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:26:01.191486  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:26:01.191561  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:26:01.191748  276511 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:26:01.191985  276511 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:26:01.192105  276511 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503275 seconds
	I0921 22:26:01.192289  276511 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:26:01.192460  276511 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:26:01.192545  276511 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:26:01.192839  276511 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:26:01.192906  276511 kubeadm.go:317] [bootstrap-token] Using token: 9ldpwz.b05pw96cyce3l1nr
	I0921 22:26:01.194593  276511 out.go:204]   - Configuring RBAC rules ...
	I0921 22:26:01.194724  276511 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:26:01.194852  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:26:01.195058  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:26:01.195234  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:26:01.195387  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:26:01.195500  276511 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:26:01.195644  276511 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:26:01.195703  276511 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:26:01.195765  276511 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:26:01.195777  276511 kubeadm.go:317] 
	I0921 22:26:01.195861  276511 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:26:01.195872  276511 kubeadm.go:317] 
	I0921 22:26:01.195980  276511 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:26:01.196004  276511 kubeadm.go:317] 
	I0921 22:26:01.196036  276511 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:26:01.196117  276511 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:26:01.196194  276511 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:26:01.196207  276511 kubeadm.go:317] 
	I0921 22:26:01.196286  276511 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:26:01.196303  276511 kubeadm.go:317] 
	I0921 22:26:01.196379  276511 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:26:01.196404  276511 kubeadm.go:317] 
	I0921 22:26:01.196485  276511 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:26:01.196595  276511 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:26:01.196694  276511 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:26:01.196706  276511 kubeadm.go:317] 
	I0921 22:26:01.196820  276511 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:26:01.196920  276511 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:26:01.196931  276511 kubeadm.go:317] 
	I0921 22:26:01.197032  276511 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197181  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:26:01.197220  276511 kubeadm.go:317] 	--control-plane 
	I0921 22:26:01.197231  276511 kubeadm.go:317] 
	I0921 22:26:01.197362  276511 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:26:01.197381  276511 kubeadm.go:317] 
	I0921 22:26:01.197495  276511 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197628  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
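Bootstrap tokens such as 9ldpwz.b05pw96cyce3l1nr expire (24h by default), so the printed join line goes stale; a replacement can be generated on the control plane at any time:

  kubeadm token create --print-join-command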
	I0921 22:26:01.197660  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:26:01.197674  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:26:01.199797  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:25:59.039749  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.040507  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.201405  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:26:01.205181  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:26:01.205199  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:26:01.218971  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
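Applying the kindnet manifest is what ultimately clears the not-ready taint seen throughout this log: once the CNI pod is running, the kubelet flips the node to Ready and CoreDNS becomes schedulable. A progress check along these lines (the app=kindnet label is an assumption about the bundled manifest, not something shown in the log):

  sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system get pods -l app=kindnet -o wide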
	I0921 22:25:58.789397  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:00.789911  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:03.540344  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:06.039881  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:02.006490  276511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:26:02.006560  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.006575  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.013858  276511 ops.go:34] apiserver oom_adj: -16
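An oom_adj of -16 tells the kernel OOM killer to strongly prefer other victims over the apiserver; minikube reads it back with the same one-liner shown a few lines up:

  cat /proc/$(pgrep kube-apiserver)/oom_adj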
	I0921 22:26:02.099832  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.694112  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.194089  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.693535  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.193854  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.693713  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.194101  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.694288  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.193619  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.693501  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.289345  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:05.789183  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:08.040230  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:10.539463  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:07.193590  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.693901  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.194072  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.694197  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.193914  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.693488  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.194416  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.693496  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.194435  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.694097  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.289258  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:10.789536  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.790035  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.194461  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.694279  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.193818  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.693711  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.758985  276511 kubeadm.go:1067] duration metric: took 11.752476269s to wait for elevateKubeSystemPrivileges.
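The half-second `kubectl get sa default` loop above is minikube waiting for the serviceaccount and token controllers to populate the default namespace; workloads cannot be admitted until that ServiceAccount exists. The equivalent one-shot check, using the same binary and kubeconfig as the loop:

  sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    get serviceaccount default -o name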
	I0921 22:26:13.759013  276511 kubeadm.go:398] StartCluster complete in 4m35.198807914s
	I0921 22:26:13.759030  276511 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:13.759144  276511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:26:13.760661  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:14.276964  276511 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
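The rescale trims the stock two-replica CoreDNS deployment down to one, which is all a single-node cluster needs; done by hand it would be (assuming kubectl access to the profile):

  kubectl -n kube-system scale deployment coredns --replicas=1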
	I0921 22:26:14.277021  276511 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:26:14.277060  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:26:14.279846  276511 out.go:177] * Verifying Kubernetes components...
	I0921 22:26:14.277154  276511 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:26:14.277306  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:26:14.281313  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:26:14.281349  276511 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281359  276511 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281373  276511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:26:14.281387  276511 addons.go:65] Setting metrics-server=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281397  276511 addons.go:65] Setting dashboard=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281436  276511 addons.go:153] Setting addon dashboard=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281450  276511 addons.go:162] addon dashboard should already be in state true
	I0921 22:26:14.281497  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281400  276511 addons.go:153] Setting addon metrics-server=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281576  276511 addons.go:162] addon metrics-server should already be in state true
	I0921 22:26:14.281377  276511 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	I0921 22:26:14.281640  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	W0921 22:26:14.281653  276511 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:26:14.281684  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281727  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282004  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282138  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282139  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.321366  276511 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.323218  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:26:14.323243  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:26:14.323321  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.323433  276511 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.323452  276511 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:26:14.323478  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.323995  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.331074  276511 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:26:14.333243  276511 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.335670  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:26:14.335699  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:26:14.335828  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.338700  276511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:26:12.540251  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:15.040305  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:14.339971  276511 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.339996  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:26:14.340067  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.357088  276511 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.357118  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:26:14.357179  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.363845  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.373248  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.374001  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.403584  276511 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:26:14.403673  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:26:14.403710  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.597706  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:26:14.597740  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:26:14.598185  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:26:14.598208  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:26:14.678717  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.691157  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:26:14.691190  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:26:14.776824  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.780103  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:26:14.780131  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:26:14.796772  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.796802  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:26:14.877240  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:26:14.877270  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:26:14.886529  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.982072  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:26:14.982106  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:26:15.083042  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:26:15.083073  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:26:15.185025  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:26:15.185058  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:26:15.288358  276511 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:26:15.295798  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:26:15.295830  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:26:15.390667  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:26:15.390693  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:26:15.415462  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.415496  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:26:15.492343  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.887638  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.208874575s)
	I0921 22:26:15.887703  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110843194s)
	I0921 22:26:15.982100  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095511944s)
	I0921 22:26:15.982142  276511 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220921220832-10174"
	I0921 22:26:16.410487  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:16.706261  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213866962s)
	I0921 22:26:16.708800  276511 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:26:16.709899  276511 addons.go:414] enableAddons completed in 2.432760887s
	I0921 22:26:15.290491  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.789818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.539620  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:20.039549  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:18.911099  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:21.409684  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:20.289226  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292776  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292799  265259 node_ready.go:38] duration metric: took 4m0.017444735s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:26:22.294631  265259 out.go:177] 
	W0921 22:26:22.296115  265259 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:26:22.296143  265259 out.go:239] * 
	W0921 22:26:22.296927  265259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:26:22.298511  265259 out.go:177] 
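
The sections below are the cluster post-mortem dumped for the embed-certs-20220921220439-10174 profile after the start timed out. Roughly the same capture can be reproduced by hand; a plausible invocation, using the profile name from this run:

	out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs --file=logs.txt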
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	6b599acb1664c       d921cee849482       About a minute ago   Running             kindnet-cni               1                   7b2148af52ea2
	9fa339f8a1798       d921cee849482       4 minutes ago        Exited              kindnet-cni               0                   7b2148af52ea2
	db2b32bf71cfd       1c7d8c51823b5       4 minutes ago        Running             kube-proxy                0                   56f73a44a0f43
	d9d6f00f601ad       97801f8394908       4 minutes ago        Running             kube-apiserver            2                   4de074ddb1303
	0e6e061bef128       ca0ea1ee3cfd3       4 minutes ago        Running             kube-scheduler            2                   9bf7c4d13f7cc
	e61defb21aca6       dbfceb93c69b6       4 minutes ago        Running             kube-controller-manager   2                   1aa2186d6444e
	cb8e747da8911       a8a176a5d5d69       4 minutes ago        Running             etcd                      2                   f0e82af2a9d13
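
The table above is the container runtime's own view of the node; with containerd it is typically what crictl reports. An illustrative equivalent, run inside the node over minikube ssh:

	out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 ssh -- sudo crictl ps -a

Two things stand out: kindnet-cni attempt 0 exited and attempt 1 was only restarted about a minute before capture, and no CNI-dependent workload pod (e.g. coredns) appears at all.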
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:17:29 UTC, end at Wed 2022-09-21 22:26:23 UTC. --
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.005786873Z" level=info msg="CreateContainer within sandbox \"56f73a44a0f438054787437cfd55cce9f49e0dfb71a5500656b5a9c6e1e643ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db2b32bf71cfdfa827a9c9802b4d48659cbe2dbab4ee43a889088c3da006fd52\""
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.006435050Z" level=info msg="StartContainer for \"db2b32bf71cfdfa827a9c9802b4d48659cbe2dbab4ee43a889088c3da006fd52\""
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.085805540Z" level=info msg="StartContainer for \"db2b32bf71cfdfa827a9c9802b4d48659cbe2dbab4ee43a889088c3da006fd52\" returns successfully"
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.218989488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-ttwgn,Uid:64a9192e-6081-4b66-8bc3-28f897591f26,Namespace:kube-system,Attempt:0,}"
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.234987221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.235063190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.235072881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.235284016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e pid=4346 runtime=io.containerd.runc.v2
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.578752281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-ttwgn,Uid:64a9192e-6081-4b66-8bc3-28f897591f26,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\""
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.583431192Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.598102477Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"9fa339f8a17988ae47ba53e5a834118b5286058169e096284e7c50ac173f6bb0\""
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.600669577Z" level=info msg="StartContainer for \"9fa339f8a17988ae47ba53e5a834118b5286058169e096284e7c50ac173f6bb0\""
	Sep 21 22:22:22 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:22:22.981870296Z" level=info msg="StartContainer for \"9fa339f8a17988ae47ba53e5a834118b5286058169e096284e7c50ac173f6bb0\" returns successfully"
	Sep 21 22:23:08 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:23:08.500684649Z" level=error msg="ContainerStatus for \"10e47864d91742a9935eea8f843db78f802e6b008ae1571d165fafe3079c23ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10e47864d91742a9935eea8f843db78f802e6b008ae1571d165fafe3079c23ad\": not found"
	Sep 21 22:23:08 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:23:08.501204436Z" level=error msg="ContainerStatus for \"f7c0b7e7cbc11ecf7c3b44b32443a76667c11cebe29e5b51d18dd040c3fb7538\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7c0b7e7cbc11ecf7c3b44b32443a76667c11cebe29e5b51d18dd040c3fb7538\": not found"
	Sep 21 22:23:08 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:23:08.501756013Z" level=error msg="ContainerStatus for \"2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c132c99660ac3b6987754acaccbc87f631bc9ffc4dade2b77ad96eef8d04334\": not found"
	Sep 21 22:23:08 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:23:08.502381997Z" level=error msg="ContainerStatus for \"ba693d75dcfc4d110194e67fe98a19c38b31807c74cc50bcdf9b73ae2677dde1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba693d75dcfc4d110194e67fe98a19c38b31807c74cc50bcdf9b73ae2677dde1\": not found"
	Sep 21 22:25:03 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:03.501261687Z" level=info msg="shim disconnected" id=9fa339f8a17988ae47ba53e5a834118b5286058169e096284e7c50ac173f6bb0
	Sep 21 22:25:03 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:03.501340003Z" level=warning msg="cleaning up after shim disconnected" id=9fa339f8a17988ae47ba53e5a834118b5286058169e096284e7c50ac173f6bb0 namespace=k8s.io
	Sep 21 22:25:03 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:03.501354822Z" level=info msg="cleaning up dead shim"
	Sep 21 22:25:03 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:03.510845440Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:25:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4765 runtime=io.containerd.runc.v2\n"
	Sep 21 22:25:04 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:04.078240734Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 21 22:25:04 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:04.091437991Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"6b599acb1664c2790e259fbd46aeea9d1c71d8d2658a062f2db94e88a20513ae\""
	Sep 21 22:25:04 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:04.092080895Z" level=info msg="StartContainer for \"6b599acb1664c2790e259fbd46aeea9d1c71d8d2658a062f2db94e88a20513ae\""
	Sep 21 22:25:04 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:25:04.192859987Z" level=info msg="StartContainer for \"6b599acb1664c2790e259fbd46aeea9d1c71d8d2658a062f2db94e88a20513ae\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220921220439-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220921220439-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=embed-certs-20220921220439-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_22_09_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:22:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220921220439-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:26:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:22:18 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:22:18 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:22:18 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:22:18 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220921220439-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                39299add-007b-4517-8e1f-4d420ff2375f
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220921220439-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m15s
	  kube-system                 kindnet-ttwgn                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20220921220439-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-embed-certs-20220921220439-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-rmkm2                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20220921220439-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s                  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s                  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s                  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                   node-controller  Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller
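
The describe output isolates the failure: Ready is False with reason KubeletNotReady ("cni plugin not initialized"), so the node.kubernetes.io/not-ready:NoSchedule taint is never cleared, the same class of untolerated-taint condition seen keeping a coredns pod Pending in the pod_ready lines earlier. A compact way to read just that condition (illustrative jsonpath query):

	kubectl get node embed-certs-20220921220439-10174 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'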
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
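
The martian-source lines are kernel noise rather than a cause: packets sourced from the 10.244.0.0/24 pod range arriving on eth0 get logged whenever martian logging is enabled. The toggle, if one wants to confirm or silence it (illustrative):

	sysctl net.ipv4.conf.all.log_martians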
	
	* 
	* ==> etcd [cb8e747da8911d7b0690bd1e54febfd721e32f467db89180c1209c4921e49ee5] <==
	* {"level":"info","ts":"2022-09-21T22:22:02.388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-09-21T22:22:02.389Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:22:02.979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:22:02.979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220921220439-10174 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:22:02.982Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:22:02.982Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
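
etcd itself bootstrapped cleanly: a fresh single-member cluster, self-election at term 2, and client traffic served on both listeners, so storage can be ruled out as the cause. Membership could be double-checked with etcdctl against the certificates the server logs above; a sketch, with cert paths taken from those log lines:

	ETCDCTL_API=3 etcdctl --endpoints=https://192.168.67.2:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  member list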
	
	* 
	* ==> kernel <==
	*  22:26:23 up  1:08,  0 users,  load average: 0.44, 0.96, 1.56
	Linux embed-certs-20220921220439-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [d9d6f00f601ad90b6215ac35efe6ec71385b625769daced48a17a5f76c90cc37] <==
	* I0921 22:22:21.562978       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0921 22:22:23.711161       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.98.215.254]
	I0921 22:22:24.086016       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.1.23]
	I0921 22:22:24.101905       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.142.242]
	W0921 22:22:24.592679       1 handler_proxy.go:105] no RequestInfo found in the context
	W0921 22:22:24.592718       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:22:24.592736       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:22:24.592742       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0921 22:22:24.592818       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:22:24.593975       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:23:24.593609       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:23:24.593655       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:23:24.593661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:23:24.594668       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:23:24.594715       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:23:24.594729       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:25:24.593944       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:25:24.593995       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:25:24.594001       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:25:24.595125       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:25:24.595196       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:25:24.595210       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
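
The recurring OpenAPI failures here are downstream of the same problem: the v1beta1.metrics.k8s.io APIService fronts the metrics-server Service, the backing pod can never be scheduled on a tainted node, and the aggregation layer answers 503, so the apiserver keeps requeueing it. The APIService condition can be read directly (illustrative):

	kubectl get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'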
	
	* 
	* ==> kube-controller-manager [e61defb21aca6380b947a10f6c1b57bbbad3be0a94605532918f46b10500a1e1] <==
	* I0921 22:22:23.985766       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0921 22:22:23.987485       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:22:23.987507       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0921 22:22:23.991588       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:22:23.991600       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0921 22:22:23.996820       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:22:23.996828       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0921 22:22:24.015183       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-nbnhj"
	I0921 22:22:24.079821       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-xnlgm"
	E0921 22:22:51.213010       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:22:51.686876       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:23:21.219468       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:23:21.697584       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:23:51.226343       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:23:51.709131       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:24:21.232072       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:24:21.721419       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:24:51.238537       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:24:51.732481       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:25:21.243365       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:25:21.742690       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:25:51.250138       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:25:51.753471       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:26:21.257430       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:26:21.763311       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
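
The controller-manager errors show the same unavailability from another angle: resource-quota and garbage-collector discovery both hit the dead metrics.k8s.io group every 30s. Discovery for just that group can be probed with (illustrative):

	kubectl api-resources --api-group=metrics.k8s.io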
	
	* 
	* ==> kube-proxy [db2b32bf71cfdfa827a9c9802b4d48659cbe2dbab4ee43a889088c3da006fd52] <==
	* I0921 22:22:22.120381       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0921 22:22:22.120449       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0921 22:22:22.120475       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:22:22.139843       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:22:22.139879       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:22:22.139897       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:22:22.139916       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:22:22.139949       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:22:22.140085       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:22:22.140287       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:22:22.140312       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:22:22.140854       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:22:22.140865       1 config.go:317] "Starting service config controller"
	I0921 22:22:22.140875       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:22:22.140883       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:22:22.140923       1 config.go:444] "Starting node config controller"
	I0921 22:22:22.141091       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:22:22.241043       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:22:22.241060       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0921 22:22:22.241179       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0e6e061bef128b505ba44db28f6d8a49a4912fe2cd4fe925288aa43db0ff17fe] <==
	* W0921 22:22:05.798787       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:22:05.798970       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:22:05.799229       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:22:05.799254       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:22:05.799327       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:22:05.799345       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:22:05.799401       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:22:05.799421       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:22:05.799526       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:22:05.799545       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:22:05.799595       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:22:05.799617       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0921 22:22:06.630521       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:22:06.630561       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:22:06.631392       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:22:06.631423       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:22:06.746394       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0921 22:22:06.746460       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:22:06.788783       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:22:06.788828       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:22:06.796264       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:22:06.796302       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:22:06.808414       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0921 22:22:06.808454       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0921 22:22:07.395625       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
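
The scheduler's forbidden errors are the usual startup race: its informers list resources before the apiserver has finished publishing the bootstrap RBAC bindings, and the final caches-synced line shows it recovered on its own. The effective permission can be verified after startup with (illustrative):

	kubectl auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler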
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:17:29 UTC, end at Wed 2022-09-21 22:26:23 UTC. --
	Sep 21 22:24:23 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:23.907628    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:24:28 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:28.909307    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:24:33 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:33.910075    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:24:38 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:38.911476    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:24:43 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:43.912463    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:24:48 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:48.914145    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:24:53 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:53.915773    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:24:58 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:24:58.916686    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:03 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:03.917812    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:04 embed-certs-20220921220439-10174 kubelet[3842]: I0921 22:25:04.075616    3842 scope.go:115] "RemoveContainer" containerID="9fa339f8a17988ae47ba53e5a834118b5286058169e096284e7c50ac173f6bb0"
	Sep 21 22:25:08 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:08.919547    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:13 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:13.920435    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:18 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:18.921156    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:23 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:23.922745    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:28 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:28.924503    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:33 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:33.925425    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:38 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:38.926251    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:43 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:43.927699    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:48 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:48.929159    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:53 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:53.930644    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:25:58 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:25:58.932119    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:26:03 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:26:03.933581    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:26:08 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:26:08.935029    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:26:13 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:26:13.936033    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:26:18 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:26:18.936765    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj: exit status 1 (60.86733ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-9lkvq" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-mplqh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-xnlgm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-nbnhj" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (536.19s)
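
The kubelet log above shows the actual failure mode: the node never left NotReady because the CNI plugin was never initialized ("Network plugin returns error: cni plugin not initialized"), so the addon pods listed in the post-mortem could never be scheduled and started. A minimal follow-up sketch for inspecting CNI state on the same profile, assuming the node container is still running; these are standard docker/minikube/crictl commands, not part of the recorded run (the container name matching the profile name is minikube's convention, and note that later in this report minikube points containerd at a custom conf_dir, /etc/cni/net.mk):

	# hypothetical post-mortem, not captured in this report
	docker exec embed-certs-20220921220439-10174 ls -l /etc/cni/net.d /etc/cni/net.mk    # is any CNI config present?
	out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 ssh -- sudo crictl info  # the network section reports CNI status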

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (534.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220921220832-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2
E0921 22:21:38.505219   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 22:21:59.250151   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:22:05.088927   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:22:25.192557   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
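
These interleaved cert_rotation errors come from the long-lived test process (pid 10174), whose client-go certificate-rotation watcher is still trying to reopen client certificates belonging to profiles (ingress-addon-legacy, bridge, old-k8s-version, cilium) that earlier tests already deleted; they are unrelated noise relative to this test. A quick hypothetical check (not part of the recorded run) that the referenced file is really gone:

	# hypothetical check, not captured in this report
	stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt
	# expected: stat: cannot stat '...': No such file or directory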

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-20220921220832-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: exit status 80 (8m52.758659385s)

                                                
                                                
-- stdout --
	* [no-preload-20220921220832-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-20220921220832-10174 in cluster no-preload-20220921220832-10174
	* Pulling base image ...
	* Restarting existing docker container for "no-preload-20220921220832-10174" ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.6.0
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0921 22:21:21.729027  276511 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:21:21.729174  276511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:21:21.729189  276511 out.go:309] Setting ErrFile to fd 2...
	I0921 22:21:21.729194  276511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:21:21.729308  276511 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:21:21.729870  276511 out.go:303] Setting JSON to false
	I0921 22:21:21.731566  276511 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3833,"bootTime":1663795049,"procs":716,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:21:21.731629  276511 start.go:125] virtualization: kvm guest
	I0921 22:21:21.734495  276511 out.go:177] * [no-preload-20220921220832-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:21:21.736412  276511 notify.go:214] Checking for updates...
	I0921 22:21:21.737826  276511 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:21:21.739371  276511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:21:21.740848  276511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:21:21.742164  276511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:21:21.743463  276511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:21:21.745159  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:21:21.745572  276511 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:21:21.776785  276511 docker.go:137] docker version: linux-20.10.18
	I0921 22:21:21.776874  276511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:21:21.873005  276511 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:21:21.797949632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:21:21.873105  276511 docker.go:254] overlay module found
	I0921 22:21:21.875489  276511 out.go:177] * Using the docker driver based on existing profile
	I0921 22:21:21.876982  276511 start.go:284] selected driver: docker
	I0921 22:21:21.877000  276511 start.go:808] validating driver "docker" against &{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:21.877104  276511 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:21:21.877949  276511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:21:21.972195  276511 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:21:21.898685177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:21:21.972596  276511 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:21:21.972625  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:21.972634  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:21.972657  276511 start_flags.go:316] config:
	{Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:21.975206  276511 out.go:177] * Starting control plane node no-preload-20220921220832-10174 in cluster no-preload-20220921220832-10174
	I0921 22:21:21.976541  276511 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:21:21.978261  276511 out.go:177] * Pulling base image ...
	I0921 22:21:21.979898  276511 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:21:21.980011  276511 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:21:21.980055  276511 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:21:21.980230  276511 cache.go:107] acquiring lock: {Name:mk964a2e66a5444defeab854e6434a6f27bdb527 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980240  276511 cache.go:107] acquiring lock: {Name:mka10a341c76ae214d12cf65b1bbb970ff641c5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980291  276511 cache.go:107] acquiring lock: {Name:mkb5c943b9da9e6c7ecc443b377ab990272f1b2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980336  276511 cache.go:107] acquiring lock: {Name:mk944562b9b2415f3d8e7ad36b373f92205bdb5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980366  276511 cache.go:107] acquiring lock: {Name:mk6ae321142fb89935897137e30217f9ae2499ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980402  276511 cache.go:107] acquiring lock: {Name:mk0eb3fbf1ee9e76ad78bfdee22277edae17ed2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980366  276511 cache.go:107] acquiring lock: {Name:mk4fab6516978f221b8246a61f380f8ab97f066c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980335  276511 cache.go:107] acquiring lock: {Name:mkee4799116b59e3f65d0127cdad0c25a01a05e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:21.980556  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 exists
	I0921 22:21:21.980581  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0921 22:21:21.980559  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 exists
	I0921 22:21:21.980583  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0921 22:21:21.980592  276511 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2" took 362.285µs
	I0921 22:21:21.980608  276511 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.25.2 succeeded
	I0921 22:21:21.980603  276511 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 272.508µs
	I0921 22:21:21.980617  276511 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0921 22:21:21.980614  276511 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 397.033µs
	I0921 22:21:21.980610  276511 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2" took 300.17µs
	I0921 22:21:21.980625  276511 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0921 22:21:21.980629  276511 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.25.2 succeeded
	I0921 22:21:21.980647  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 exists
	I0921 22:21:21.980673  276511 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2" took 420.957µs
	I0921 22:21:21.980689  276511 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.25.2 succeeded
	I0921 22:21:21.980713  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 exists
	I0921 22:21:21.980730  276511 cache.go:96] cache image "registry.k8s.io/etcd:3.5.4-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0" took 401.678µs
	I0921 22:21:21.980744  276511 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.4-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.4-0 succeeded
	I0921 22:21:21.980757  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 exists
	I0921 22:21:21.980790  276511 cache.go:96] cache image "registry.k8s.io/pause:3.8" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8" took 470.77µs
	I0921 22:21:21.980807  276511 cache.go:80] save to tar file registry.k8s.io/pause:3.8 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/pause_3.8 succeeded
	I0921 22:21:21.980833  276511 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 exists
	I0921 22:21:21.980848  276511 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.25.2" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2" took 492.866µs
	I0921 22:21:21.980861  276511 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.25.2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.25.2 succeeded
	I0921 22:21:21.980876  276511 cache.go:87] Successfully saved all images to host disk.
	I0921 22:21:22.004613  276511 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:21:22.004656  276511 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:21:22.004676  276511 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:21:22.004708  276511 start.go:364] acquiring machines lock for no-preload-20220921220832-10174: {Name:mk189db360f5ac486cb35206c34214af6d1c65b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:21:22.004793  276511 start.go:368] acquired machines lock for "no-preload-20220921220832-10174" in 64.56µs
	I0921 22:21:22.004813  276511 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:21:22.004818  276511 fix.go:55] fixHost starting: 
	I0921 22:21:22.005039  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:21:22.028746  276511 fix.go:103] recreateIfNeeded on no-preload-20220921220832-10174: state=Stopped err=<nil>
	W0921 22:21:22.028785  276511 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:21:22.031134  276511 out.go:177] * Restarting existing docker container for "no-preload-20220921220832-10174" ...
	I0921 22:21:22.032731  276511 cli_runner.go:164] Run: docker start no-preload-20220921220832-10174
	I0921 22:21:22.397294  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:21:22.425241  276511 kic.go:415] container "no-preload-20220921220832-10174" state is running.
	I0921 22:21:22.425628  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:22.452469  276511 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/config.json ...
	I0921 22:21:22.452688  276511 machine.go:88] provisioning docker machine ...
	I0921 22:21:22.452713  276511 ubuntu.go:169] provisioning hostname "no-preload-20220921220832-10174"
	I0921 22:21:22.452750  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:22.481744  276511 main.go:134] libmachine: Using SSH client type: native
	I0921 22:21:22.481925  276511 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49438 <nil> <nil>}
	I0921 22:21:22.481949  276511 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220921220832-10174 && echo "no-preload-20220921220832-10174" | sudo tee /etc/hostname
	I0921 22:21:22.482598  276511 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35926->127.0.0.1:49438: read: connection reset by peer
	I0921 22:21:25.619844  276511 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220921220832-10174
	
	I0921 22:21:25.619917  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:25.644377  276511 main.go:134] libmachine: Using SSH client type: native
	I0921 22:21:25.644520  276511 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49438 <nil> <nil>}
	I0921 22:21:25.644541  276511 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220921220832-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220921220832-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220921220832-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:21:25.771438  276511 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:21:25.771470  276511 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:21:25.771545  276511 ubuntu.go:177] setting up certificates
	I0921 22:21:25.771554  276511 provision.go:83] configureAuth start
	I0921 22:21:25.771606  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:25.795693  276511 provision.go:138] copyHostCerts
	I0921 22:21:25.795778  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:21:25.795798  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:21:25.795864  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:21:25.795944  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:21:25.795955  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:21:25.795981  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:21:25.796035  276511 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:21:25.796044  276511 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:21:25.796066  276511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:21:25.796151  276511 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220921220832-10174 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220921220832-10174]
	I0921 22:21:25.980041  276511 provision.go:172] copyRemoteCerts
	I0921 22:21:25.980129  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:21:25.980174  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.005654  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.099196  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:21:26.116665  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0921 22:21:26.133700  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0921 22:21:26.150095  276511 provision.go:86] duration metric: configureAuth took 378.527139ms
	I0921 22:21:26.150126  276511 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:21:26.150282  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:21:26.150293  276511 machine.go:91] provisioned docker machine in 3.697591605s
	I0921 22:21:26.150301  276511 start.go:300] post-start starting for "no-preload-20220921220832-10174" (driver="docker")
	I0921 22:21:26.150307  276511 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:21:26.150350  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:21:26.150391  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.177098  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.266994  276511 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:21:26.269733  276511 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:21:26.269758  276511 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:21:26.269766  276511 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:21:26.269773  276511 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:21:26.269784  276511 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:21:26.269843  276511 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:21:26.269931  276511 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:21:26.270038  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:21:26.276595  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:21:26.293384  276511 start.go:303] post-start completed in 143.069577ms
	I0921 22:21:26.293459  276511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:21:26.293509  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.319279  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.412318  276511 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:21:26.416228  276511 fix.go:57] fixHost completed within 4.411406055s
	I0921 22:21:26.416252  276511 start.go:83] releasing machines lock for "no-preload-20220921220832-10174", held for 4.411447835s
	I0921 22:21:26.416336  276511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220921220832-10174
	I0921 22:21:26.439824  276511 ssh_runner.go:195] Run: systemctl --version
	I0921 22:21:26.439875  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.439894  276511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:21:26.439973  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:21:26.463981  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.464292  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:21:26.585502  276511 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:21:26.597003  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:21:26.606196  276511 docker.go:188] disabling docker service ...
	I0921 22:21:26.606244  276511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:21:26.615407  276511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:21:26.623690  276511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:21:26.699874  276511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:21:26.778612  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:21:26.787337  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:21:26.799540  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:21:26.807935  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:21:26.815661  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:21:26.823769  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:21:26.831216  276511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:21:26.837204  276511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:21:26.843235  276511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:21:26.913162  276511 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:21:26.985402  276511 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:21:26.985482  276511 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:21:26.989229  276511 start.go:471] Will wait 60s for crictl version
	I0921 22:21:26.989292  276511 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:21:27.015951  276511 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:21:27Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:21:38.063256  276511 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:21:38.087330  276511 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:21:38.087394  276511 ssh_runner.go:195] Run: containerd --version
	I0921 22:21:38.117027  276511 ssh_runner.go:195] Run: containerd --version
	I0921 22:21:38.148570  276511 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:21:38.150093  276511 cli_runner.go:164] Run: docker network inspect no-preload-20220921220832-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:21:38.172557  276511 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0921 22:21:38.175833  276511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:21:38.185102  276511 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:21:38.185143  276511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:21:38.207088  276511 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:21:38.207109  276511 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:21:38.207180  276511 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:21:38.230239  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:38.230269  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:38.230283  276511 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:21:38.230305  276511 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220921220832-10174 NodeName:no-preload-20220921220832-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:21:38.230491  276511 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220921220832-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:21:38.230603  276511 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220921220832-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0921 22:21:38.230653  276511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:21:38.237825  276511 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:21:38.237881  276511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:21:38.244824  276511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (524 bytes)
	I0921 22:21:38.257993  276511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:21:38.270025  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2060 bytes)
	I0921 22:21:38.282061  276511 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:21:38.285065  276511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
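	
	Editor's note: the brace-group one-liner above is the entire /etc/hosts update: filter out any stale control-plane.minikube.internal line, echo the fresh mapping, stage the result in a PID-suffixed temp file, then sudo cp it into place. A rough Go equivalent (illustrative only):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const entry = "192.168.94.2\tcontrol-plane.minikube.internal"
	
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue // drop the stale mapping; re-added below
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
	
		tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// The log then runs `sudo cp` because /etc/hosts is root-owned.
		fmt.Println("staged", tmp)
	}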
	I0921 22:21:38.294394  276511 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174 for IP: 192.168.94.2
	I0921 22:21:38.294515  276511 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:21:38.294555  276511 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:21:38.294619  276511 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/client.key
	I0921 22:21:38.294690  276511 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key.ad8e880a
	I0921 22:21:38.294731  276511 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key
	I0921 22:21:38.294821  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:21:38.294848  276511 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:21:38.294860  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:21:38.294885  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:21:38.294912  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:21:38.294934  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:21:38.294971  276511 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:21:38.295476  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:21:38.312346  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:21:38.328491  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:21:38.344965  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/no-preload-20220921220832-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:21:38.361363  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:21:38.378193  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:21:38.394663  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:21:38.411219  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:21:38.427455  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:21:38.443759  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:21:38.459952  276511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:21:38.477220  276511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:21:38.490029  276511 ssh_runner.go:195] Run: openssl version
	I0921 22:21:38.494865  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:21:38.502105  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.505092  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.505143  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:21:38.510082  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:21:38.516779  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:21:38.524387  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.527407  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.527449  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:21:38.532184  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
	I0921 22:21:38.538593  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:21:38.545959  276511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.548914  276511 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.548957  276511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:21:38.553573  276511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
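	
	Editor's note: "openssl x509 -hash -noout" prints the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, which is why each certificate above is symlinked as <hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0). A sketch of the same hash-and-link step in Go (needs root to write into /etc/ssl/certs):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// hashLink computes the OpenSSL subject hash for a PEM certificate and
	// links it into /etc/ssl/certs as <hash>.0, mirroring the commands above.
	func hashLink(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs semantics: remove any existing link, then create a fresh one.
		_ = os.Remove(link)
		return os.Symlink(pem, link)
	}
	
	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}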
	I0921 22:21:38.560211  276511 kubeadm.go:396] StartCluster: {Name:no-preload-20220921220832-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:no-preload-20220921220832-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:21:38.560292  276511 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:21:38.560329  276511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:21:38.584578  276511 cri.go:87] found id: "8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	I0921 22:21:38.584604  276511 cri.go:87] found id: "3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843"
	I0921 22:21:38.584611  276511 cri.go:87] found id: "6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646"
	I0921 22:21:38.584617  276511 cri.go:87] found id: "a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb"
	I0921 22:21:38.584622  276511 cri.go:87] found id: "b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0"
	I0921 22:21:38.584629  276511 cri.go:87] found id: "b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409"
	I0921 22:21:38.584635  276511 cri.go:87] found id: ""
	I0921 22:21:38.584680  276511 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:21:38.597489  276511 cri.go:114] JSON = null
	W0921 22:21:38.597556  276511 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
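	
	Editor's note: before restarting, minikube cross-checks the CRI view (crictl ps) against the OCI runtime view (runc list) of the same containerd root; here runc reported JSON null while crictl saw six containers, so the unpause is skipped with the warning above. An illustrative re-creation of that consistency check:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl:", err)
			return
		}
		ids := strings.Fields(string(psOut))
	
		listOut, err := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println("runc:", err)
			return
		}
		var containers []map[string]any // runc prints "null" when nothing is running
		_ = json.Unmarshal(listOut, &containers)
	
		if len(containers) != len(ids) {
			fmt.Printf("list returned %d containers, but ps returned %d\n",
				len(containers), len(ids))
		}
	}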
	I0921 22:21:38.597640  276511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:21:38.604641  276511 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:21:38.604678  276511 kubeadm.go:627] restartCluster start
	I0921 22:21:38.604716  276511 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:21:38.611273  276511 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:38.611984  276511 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220921220832-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:21:38.612435  276511 kubeconfig.go:127] "no-preload-20220921220832-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:21:38.613052  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:21:38.614343  276511 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:21:38.620864  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:38.620917  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:38.628681  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:38.829072  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:38.829161  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:38.837312  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.029609  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.029716  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.038394  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.229726  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.229799  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.238375  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.429768  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.429867  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.438213  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.629500  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.629592  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.638208  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:39.829520  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:39.829665  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:39.838208  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.029479  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.029573  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.038635  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.228885  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.228956  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.237569  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.429785  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.429859  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.438642  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.628883  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.628958  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.637446  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:40.829709  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:40.829789  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:40.838273  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.029560  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.029638  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.038065  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.229380  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.229482  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.238040  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.429329  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.429408  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.437964  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.629268  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.629339  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.637793  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.637813  276511 api_server.go:165] Checking apiserver status ...
	I0921 22:21:41.637849  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:21:41.645663  276511 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.645692  276511 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0921 22:21:41.645700  276511 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:21:41.645711  276511 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:21:41.645761  276511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:21:41.669678  276511 cri.go:87] found id: "8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b"
	I0921 22:21:41.669709  276511 cri.go:87] found id: "3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843"
	I0921 22:21:41.669719  276511 cri.go:87] found id: "6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646"
	I0921 22:21:41.669728  276511 cri.go:87] found id: "a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb"
	I0921 22:21:41.669736  276511 cri.go:87] found id: "b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0"
	I0921 22:21:41.669746  276511 cri.go:87] found id: "b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409"
	I0921 22:21:41.669758  276511 cri.go:87] found id: ""
	I0921 22:21:41.669765  276511 cri.go:232] Stopping containers: [8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b 3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843 6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646 a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0 b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409]
	I0921 22:21:41.669831  276511 ssh_runner.go:195] Run: which crictl
	I0921 22:21:41.672722  276511 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 8c81e8e062ce578d93aaa7742d6ea9313a45d09b7520be04efd4ddd67bd4734b 3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843 6a4b91f0531d15b584576f58078373b62517efb45934f9088111d572c5de1646 a9c3d39d9942f75aca084150dc32f191aed558a4eca281497df9885089a3cacb b69529a7e224f2addaf76a8ffcfd12b9dfc7f85844417e58938c8a2a1dc26be0 b1a22ede66e3113827144deb16ed1bddbe2e6b60af54e01396e6daf924a4c409
	I0921 22:21:41.698115  276511 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:21:41.708176  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:21:41.715094  276511 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 21 22:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Sep 21 22:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:08 /etc/kubernetes/scheduler.conf
	
	I0921 22:21:41.715152  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0921 22:21:41.721698  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0921 22:21:41.728286  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0921 22:21:41.734815  276511 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.734874  276511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:21:41.741153  276511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0921 22:21:41.747551  276511 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:21:41.747599  276511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:21:41.753773  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:21:41.760238  276511 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:21:41.760255  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:41.804588  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.356962  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.489434  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:42.539390  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
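	
	Editor's note: rather than running a full "kubeadm init", restartCluster replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml, recreating only the pieces that were removed above. A compact sketch of that sequence (phase names are taken straight from the log; error handling is simplified):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("kubeadm", args...)
			// Prepend the version-pinned binaries dir, as the log's env PATH does.
			cmd.Env = append(os.Environ(),
				"PATH=/var/lib/minikube/binaries/v1.25.2:"+os.Getenv("PATH"))
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("phase %v failed: %v\n%s", p, err, out)
				return
			}
		}
	}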
	I0921 22:21:42.683809  276511 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:21:42.683920  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.194560  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.694761  276511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:21:43.776158  276511 api_server.go:71] duration metric: took 1.092348408s to wait for apiserver process to appear ...
	I0921 22:21:43.776236  276511 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:21:43.776260  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:43.776614  276511 api_server.go:256] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0921 22:21:44.276913  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:46.667105  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:21:46.667136  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:21:46.777448  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:46.781780  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:21:46.781806  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:47.277400  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:47.282106  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:21:47.282133  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:47.777302  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:47.781834  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:21:47.781871  276511 api_server.go:102] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:21:48.277407  276511 api_server.go:240] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0921 22:21:48.283340  276511 api_server.go:266] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0921 22:21:48.290556  276511 api_server.go:140] control plane version: v1.25.2
	I0921 22:21:48.290586  276511 api_server.go:130] duration metric: took 4.514332252s to wait for apiserver health ...
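	
	Editor's note: the healthz wait above treats everything short of HTTP 200 as retryable: first a 403 while requests are still anonymous, then 500s while the rbac/bootstrap-roles and scheduling post-start hooks settle, and finally "ok". A self-contained sketch of that loop (TLS verification is skipped purely to keep the sketch standalone; the real client authenticates against the cluster):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.94.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
			}
			// 403/500/connection refused: back off briefly and retry.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}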
	I0921 22:21:48.290599  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:21:48.290609  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:21:48.293728  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:21:48.295168  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:21:48.298937  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:21:48.298959  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:21:48.313543  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:21:49.163078  276511 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:21:49.170085  276511 system_pods.go:59] 9 kube-system pods found
	I0921 22:21:49.170122  276511 system_pods.go:61] "coredns-565d847f94-m8xgt" [67685b7a-28c7-49a1-a4aa-e82aadc5a69b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170132  276511 system_pods.go:61] "etcd-no-preload-20220921220832-10174" [0fca2788-2ad8-4e18-b8e5-e39cefa36c58] Running
	I0921 22:21:49.170141  276511 system_pods.go:61] "kindnet-27cj5" [90383218-a547-458a-8b5e-af84c9d2b017] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0921 22:21:49.170148  276511 system_pods.go:61] "kube-apiserver-no-preload-20220921220832-10174" [3d9f96c7-a367-41ec-8423-c106fa567853] Running
	I0921 22:21:49.170160  276511 system_pods.go:61] "kube-controller-manager-no-preload-20220921220832-10174" [86ad77b8-aa2b-4d95-a588-48d9493546d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0921 22:21:49.170171  276511 system_pods.go:61] "kube-proxy-nxpf5" [ff6290f8-6cb7-4fae-99a2-7e36bb2e525b] Running
	I0921 22:21:49.170182  276511 system_pods.go:61] "kube-scheduler-no-preload-20220921220832-10174" [9c1e10b4-b7eb-4633-a544-62cbe7ed19d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0921 22:21:49.170196  276511 system_pods.go:61] "metrics-server-5c8fd5cf8-l82b6" [c17d4483-0758-4a2c-b310-2451393c8fa9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170208  276511 system_pods.go:61] "storage-provisioner" [51a29d45-5827-48fc-a122-67c7c5c5d190] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:21:49.170220  276511 system_pods.go:74] duration metric: took 7.119308ms to wait for pod list to return data ...
	I0921 22:21:49.170236  276511 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:21:49.172624  276511 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:21:49.172663  276511 node_conditions.go:123] node cpu capacity is 8
	I0921 22:21:49.172674  276511 node_conditions.go:105] duration metric: took 2.43038ms to run NodePressure ...
	I0921 22:21:49.172699  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:21:49.303995  276511 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:21:49.307574  276511 kubeadm.go:778] kubelet initialised
	I0921 22:21:49.307598  276511 kubeadm.go:779] duration metric: took 3.577635ms waiting for restarted kubelet to initialise ...
	I0921 22:21:49.307604  276511 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:21:49.312287  276511 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	I0921 22:21:51.318183  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:53.818525  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:56.318234  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:21:58.818354  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:01.317924  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:03.318485  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:05.318662  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:07.818323  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:10.317822  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:12.318364  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:14.318738  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:16.817958  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:18.818500  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:21.317777  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:23.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:26.317609  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:28.317733  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:30.318197  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:32.318313  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:34.318420  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:36.818347  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:39.317465  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:41.817507  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:43.817568  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:45.818320  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:48.318077  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:50.318393  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:52.817366  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:54.818323  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:57.318147  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:22:59.818131  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:01.818325  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:04.318178  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:06.817937  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:08.818155  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:10.818493  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:13.317568  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:15.318068  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:17.818331  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:20.318055  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:22.817832  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:24.818318  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:27.317384  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:29.318499  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:31.818328  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:34.317921  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:36.318549  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:38.817288  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:40.818024  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:43.317555  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:45.817730  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:48.318608  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:50.818073  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:52.818642  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:55.317928  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:57.318050  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:59.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:01.818416  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:04.317912  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:06.817672  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.317278  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:11.317829  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:13.817410  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:15.817496  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:17.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:19.818111  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:22.317667  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:24.317872  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:26.817684  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:29.317793  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:31.318001  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:33.817371  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:35.817456  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:37.817967  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:40.318244  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:42.817716  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.818139  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:47.317825  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.318211  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:51.817491  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:55.818165  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.318151  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:00.318254  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.817283  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.817911  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.817957  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:12.317490  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.818433  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:17.317880  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:22.317640  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:24.318075  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.817737  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.818270  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:31.318310  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:33.318392  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.818247  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:38.317564  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:40.317641  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:42.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:45.317732  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:47.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
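The run of pod_ready.go:102 lines above is a readiness poll on a roughly 2.5-second cadence: the coredns pod never leaves Pending because the cluster's only node still carries the node.kubernetes.io/not-ready taint, so the scheduler keeps reporting it Unschedulable. A minimal sketch of this style of poll, assuming client-go and a kubeconfig in the default location (the pod name, interval, and 4-minute deadline mirror the log; this is illustrative, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // the 4m0s wait seen in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-565d847f94-m8xgt", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence above
	}
	fmt.Println("timed out waiting for pod to be Ready")
}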
	I0921 22:25:49.314620  276511 pod_ready.go:81] duration metric: took 4m0.002300536s waiting for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	E0921 22:25:49.314670  276511 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:25:49.314692  276511 pod_ready.go:38] duration metric: took 4m0.007078344s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:25:49.314717  276511 kubeadm.go:631] restartCluster took 4m10.710033944s
	W0921 22:25:49.314858  276511 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:25:49.314887  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:25:52.154431  276511 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.839517184s)
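With the restart path abandoned, minikube falls back to wiping the cluster state: the Run/Completed pair above executes kubeadm reset inside the node and reports the elapsed 2.8s. An illustrative sketch of running and timing the same command with os/exec (the real ssh_runner executes it over SSH inside the node container; running this locally would require kubeadm at that path):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The exact command from the log; PATH points at minikube's bundled binaries.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force`)
	start := time.Now()
	out, err := cmd.CombinedOutput()
	fmt.Printf("Completed in %s (err=%v)\n%s", time.Since(start), err, out)
}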
	I0921 22:25:52.154487  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:25:52.163969  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:25:52.170969  276511 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:25:52.171027  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:25:52.177996  276511 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
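The exit status 2 here is expected rather than an error: kubeadm reset has just deleted the kubeconfig files, so the ls probe finds nothing and minikube skips stale-config cleanup and proceeds straight to kubeadm init. A small sketch of the same presence check, assuming the standard /etc/kubernetes paths:

package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			// Mirrors "config check failed, skipping stale config cleanup".
			fmt.Printf("config check failed (%v), skipping stale config cleanup\n", err)
			return
		}
	}
	fmt.Println("all configs present; stale config cleanup would run")
}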
	I0921 22:25:52.178063  276511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:25:52.213969  276511 kubeadm.go:317] W0921 22:25:52.213140    3321 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:25:52.246713  276511 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:25:52.310910  276511 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
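That second warning is benign here, since minikube starts kubelet itself, but the remediation it names is the one-liner below on nodes that should survive a reboot:

    # make systemd start kubelet on boot, as the preflight warning suggests
    sudo systemctl enable kubelet.service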
	I0921 22:26:01.184243  276511 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:26:01.184314  276511 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:26:01.184416  276511 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:26:01.184507  276511 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:26:01.184592  276511 kubeadm.go:317] OS: Linux
	I0921 22:26:01.184673  276511 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:26:01.184737  276511 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:26:01.184793  276511 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:26:01.184856  276511 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:26:01.184921  276511 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:26:01.184985  276511 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:26:01.185046  276511 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:26:01.185099  276511 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:26:01.185157  276511 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:26:01.185254  276511 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:26:01.185380  276511 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:26:01.185526  276511 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
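The pre-pull hint in that preflight line corresponds to a real kubeadm subcommand; a minimal sketch, with the version taken from this log:

    # pre-pull the control-plane images so 'kubeadm init' does not block on downloads
    kubeadm config images pull --kubernetes-version v1.25.2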
	I0921 22:26:01.185623  276511 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:26:01.187463  276511 out.go:204]   - Generating certificates and keys ...
	I0921 22:26:01.187540  276511 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:26:01.187594  276511 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:26:01.187659  276511 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:26:01.187785  276511 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:26:01.187900  276511 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:26:01.187958  276511 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:26:01.188014  276511 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:26:01.188086  276511 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:26:01.188221  276511 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:26:01.188336  276511 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:26:01.188409  276511 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:26:01.188488  276511 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:26:01.188556  276511 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:26:01.188636  276511 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:26:01.188731  276511 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:26:01.188817  276511 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:26:01.188953  276511 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:26:01.189087  276511 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:26:01.189191  276511 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:26:01.189310  276511 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:26:01.191284  276511 out.go:204]   - Booting up control plane ...
	I0921 22:26:01.191385  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:26:01.191486  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:26:01.191561  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:26:01.191748  276511 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:26:01.191985  276511 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:26:01.192105  276511 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503275 seconds
	I0921 22:26:01.192289  276511 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:26:01.192460  276511 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:26:01.192545  276511 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:26:01.192839  276511 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:26:01.192906  276511 kubeadm.go:317] [bootstrap-token] Using token: 9ldpwz.b05pw96cyce3l1nr
	I0921 22:26:01.194593  276511 out.go:204]   - Configuring RBAC rules ...
	I0921 22:26:01.194724  276511 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:26:01.194852  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:26:01.195058  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:26:01.195234  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0921 22:26:01.195387  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:26:01.195500  276511 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:26:01.195644  276511 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:26:01.195703  276511 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:26:01.195765  276511 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:26:01.195777  276511 kubeadm.go:317] 
	I0921 22:26:01.195861  276511 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:26:01.195872  276511 kubeadm.go:317] 
	I0921 22:26:01.195980  276511 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:26:01.196004  276511 kubeadm.go:317] 
	I0921 22:26:01.196036  276511 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:26:01.196117  276511 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:26:01.196194  276511 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:26:01.196207  276511 kubeadm.go:317] 
	I0921 22:26:01.196286  276511 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:26:01.196303  276511 kubeadm.go:317] 
	I0921 22:26:01.196379  276511 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:26:01.196404  276511 kubeadm.go:317] 
	I0921 22:26:01.196485  276511 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:26:01.196595  276511 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:26:01.196694  276511 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:26:01.196706  276511 kubeadm.go:317] 
	I0921 22:26:01.196820  276511 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:26:01.196920  276511 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:26:01.196931  276511 kubeadm.go:317] 
	I0921 22:26:01.197032  276511 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197181  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:26:01.197220  276511 kubeadm.go:317] 	--control-plane 
	I0921 22:26:01.197231  276511 kubeadm.go:317] 
	I0921 22:26:01.197362  276511 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:26:01.197381  276511 kubeadm.go:317] 
	I0921 22:26:01.197495  276511 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197628  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
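Bootstrap tokens like the one printed above expire (24h by default), so the join command goes stale; regenerating a fresh one later is a single standard kubeadm call on the control plane:

    # prints a new 'kubeadm join ...' line with a fresh token and the current CA cert hash
    kubeadm token create --print-join-command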
	I0921 22:26:01.197660  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:26:01.197674  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:26:01.199797  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:26:01.201405  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:26:01.205181  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:26:01.205199  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:26:01.218971  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
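Once the kindnet manifest is applied, the CNI runs as a DaemonSet in kube-system; a hedged check (the app=kindnet label is what minikube's bundled manifest uses, worth verifying against other versions):

    # each node should get one kindnet pod; a missing or crashing pod here keeps the node NotReady
    kubectl -n kube-system get pods -l app=kindnet -o wide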
	I0921 22:26:02.006490  276511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:26:02.006560  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.006575  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.013858  276511 ops.go:34] apiserver oom_adj: -16
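The oom_adj: -16 line is minikube reading the API server's OOM-killer bias from procfs; the equivalent manual check on the node (modern kernels expose oom_score_adj alongside the legacy oom_adj file):

    # a negative value biases the kernel away from killing the apiserver under memory pressure
    cat /proc/$(pgrep kube-apiserver)/oom_adj /proc/$(pgrep kube-apiserver)/oom_score_adj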
	I0921 22:26:02.099832  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.694112  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.194089  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.693535  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.193854  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.693713  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.194101  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.694288  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.193619  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.693501  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.193590  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.693901  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.194072  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.694197  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.193914  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.693488  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.194416  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.693496  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.194435  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.694097  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.194461  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.694279  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.193818  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.693711  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.758985  276511 kubeadm.go:1067] duration metric: took 11.752476269s to wait for elevateKubeSystemPrivileges.
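The burst of repeated 'get sa default' calls above is minikube polling for the default ServiceAccount to exist before it can grant it cluster-admin; a minimal bash equivalent of that wait (paths from this log; the loop itself is a sketch, not minikube's actual implementation):

    # retry until the token controller has created the default ServiceAccount
    until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done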
	I0921 22:26:13.759013  276511 kubeadm.go:398] StartCluster complete in 4m35.198807914s
	I0921 22:26:13.759030  276511 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:13.759144  276511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:26:13.760661  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:14.276964  276511 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
	I0921 22:26:14.277021  276511 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:26:14.277060  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:26:14.279846  276511 out.go:177] * Verifying Kubernetes components...
	I0921 22:26:14.277154  276511 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:26:14.277306  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:26:14.281313  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:26:14.281349  276511 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281359  276511 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281373  276511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:26:14.281387  276511 addons.go:65] Setting metrics-server=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281397  276511 addons.go:65] Setting dashboard=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281436  276511 addons.go:153] Setting addon dashboard=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281450  276511 addons.go:162] addon dashboard should already be in state true
	I0921 22:26:14.281497  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281400  276511 addons.go:153] Setting addon metrics-server=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281576  276511 addons.go:162] addon metrics-server should already be in state true
	I0921 22:26:14.281377  276511 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	I0921 22:26:14.281640  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	W0921 22:26:14.281653  276511 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:26:14.281684  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281727  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282004  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282138  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282139  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.321366  276511 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.323218  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:26:14.323243  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:26:14.323321  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.323433  276511 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.323452  276511 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:26:14.323478  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.323995  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.331074  276511 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:26:14.333243  276511 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.335670  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:26:14.335699  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:26:14.335828  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.338700  276511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:26:14.339971  276511 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.339996  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:26:14.340067  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.357088  276511 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.357118  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:26:14.357179  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.363845  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.373248  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.374001  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.403584  276511 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:26:14.403673  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:26:14.403710  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.597706  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:26:14.597740  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:26:14.598185  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:26:14.598208  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:26:14.678717  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.691157  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:26:14.691190  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:26:14.776824  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.780103  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:26:14.780131  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:26:14.796772  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.796802  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:26:14.877240  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:26:14.877270  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:26:14.886529  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.982072  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:26:14.982106  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:26:15.083042  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:26:15.083073  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:26:15.185025  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:26:15.185058  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:26:15.288358  276511 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
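The injected record lands in the CoreDNS Corefile as a hosts block (the sed pipeline a few lines up shows the exact edit); one way to confirm it took effect, assuming cluster access:

    # the Corefile should now contain a hosts stanza mapping host.minikube.internal
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 hosts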
	I0921 22:26:15.295798  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:26:15.295830  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:26:15.390667  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:26:15.390693  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:26:15.415462  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.415496  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:26:15.492343  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.887638  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.208874575s)
	I0921 22:26:15.887703  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110843194s)
	I0921 22:26:15.982100  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095511944s)
	I0921 22:26:15.982142  276511 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220921220832-10174"
	I0921 22:26:16.410487  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:16.706261  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213866962s)
	I0921 22:26:16.708800  276511 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:26:16.709899  276511 addons.go:414] enableAddons completed in 2.432760887s
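With the four addons reported enabled, their workloads can be inspected directly; a hedged snapshot (these are the namespaces and labels the stock minikube addon manifests use):

    # storage-provisioner and metrics-server deploy into kube-system; the dashboard gets its own namespace
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kubernetes-dashboard get pods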
	I0921 22:26:18.911099  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:21.409684  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:23.410505  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:25.909606  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:27.910578  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:30.410429  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:32.910296  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:34.911081  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:37.410360  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:39.410436  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:41.909862  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:43.910310  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:46.409644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:48.410566  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:50.410732  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:52.910395  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:54.910495  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:57.409907  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:59.410288  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:01.910153  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:04.409817  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:06.410562  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:08.910302  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:11.410571  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:13.909964  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:15.910369  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:18.410585  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:20.910125  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:22.910441  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:25.410069  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:27.410438  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:29.410512  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:31.910290  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:34.409802  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:36.909982  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:39.409679  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:41.410245  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:43.909863  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:45.910696  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:48.410147  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:50.410237  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:52.910535  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:55.410601  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:57.910322  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:59.910846  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:02.410370  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:04.410513  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:06.910328  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:09.409926  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:11.410618  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:13.909830  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:15.910746  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:18.409773  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:20.410208  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:22.410702  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:24.909931  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:26.910183  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:28.910722  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:31.410255  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:33.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:35.410708  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:37.910661  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.410644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:42.910093  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:44.910463  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:47.410019  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:49.410428  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:51.410461  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:53.909716  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:55.910611  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:58.409630  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:00.410447  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:02.910338  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:05.410066  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:07.910279  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:10.410127  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:12.910452  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:15.410553  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:17.910479  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:20.410559  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:22.909898  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:24.910567  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:27.410039  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:29.410131  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:31.410291  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:33.910459  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:36.410445  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:38.910532  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:41.409671  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:43.410422  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:45.910511  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:48.409951  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:50.410363  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:52.411261  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:54.910318  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:57.409683  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:59.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:01.909994  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.910230  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:06.410036  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:08.909550  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:10.910475  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:13.409889  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:14.413229  276511 node_ready.go:38] duration metric: took 4m0.009606009s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:30:14.416209  276511 out.go:177] 
	W0921 22:30:14.417896  276511 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:30:14.417916  276511 out.go:239] * 
	W0921 22:30:14.418711  276511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:30:14.420798  276511 out.go:177] 

                                                
                                                
** /stderr **
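The failure mode throughout this run is a node that never reaches Ready inside the 6m budget. The standard triage, sketched here with the profile name from the log, is to read the node conditions and the kubelet state before anything else:

    # NetworkReady=false in the conditions block points at the CNI; kubelet errors point at the runtime
    kubectl describe node no-preload-20220921220832-10174 | sed -n '/Conditions:/,/Addresses:/p'
    minikube -p no-preload-20220921220832-10174 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50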
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-20220921220832-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220832-10174
helpers_test.go:235: (dbg) docker inspect no-preload-20220921220832-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e",
	        "Created": "2022-09-21T22:08:33.259074855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276819,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:21:22.389970999Z",
	            "FinishedAt": "2022-09-21T22:21:20.752642361Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hosts",
	        "LogPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e-json.log",
	        "Name": "/no-preload-20220921220832-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220921220832-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220921220832-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220921220832-10174",
	                "Source": "/var/lib/docker/volumes/no-preload-20220921220832-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220921220832-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "name.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80be6817ec09ec1e98145a8a646af11f4f74d4ba59d85211dcfab6cba5a3401d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/80be6817ec09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220921220832-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d6359e799a3f",
	                        "no-preload-20220921220832-10174"
	                    ],
	                    "NetworkID": "40cb175bb75cdb2ff8ee942229fbc7e22e0ed7651da5bae77cd3dd1e2f70c5e3",
	                    "EndpointID": "e7b2dfb5c43b9948e24c210d676d20bdba88c008cdb5f205fd56c5ca5e54225a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
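
The inspect output above shows how minikube publishes the kic container's internal service ports (22/tcp for SSH, 8443/tcp for the apiserver, and so on) on loopback-only host ports. The harness reads a mapping back with the same Go-template query that recurs throughout this log; below is a minimal standalone sketch of that lookup. The container name comes from this report; the helper name is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort reads the host port Docker published for a container port,
    // using the same inspect template seen throughout this log.
    func hostPort(container, port string) (string, error) {
        format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("no-preload-20220921220832-10174", "22/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh published on 127.0.0.1:" + p) // "49438" in the JSON above
    }

Binding every published port to 127.0.0.1 keeps the node's SSH and apiserver reachable only from the CI host itself.
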
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220921220832-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:23 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:24:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:24:01.692796  283599 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:24:01.693211  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693232  283599 out.go:309] Setting ErrFile to fd 2...
	I0921 22:24:01.693240  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693504  283599 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:24:01.694665  283599 out.go:303] Setting JSON to false
	I0921 22:24:01.696140  283599 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3993,"bootTime":1663795049,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:24:01.696247  283599 start.go:125] virtualization: kvm guest
	I0921 22:24:01.698874  283599 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:24:01.701214  283599 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:24:01.701128  283599 notify.go:214] Checking for updates...
	I0921 22:24:01.703092  283599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:24:01.704791  283599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:01.706544  283599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:24:01.708172  283599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:23:57.318050  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:59.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:01.710349  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:01.710930  283599 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:24:01.744026  283599 docker.go:137] docker version: linux-20.10.18
	I0921 22:24:01.744136  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.840732  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.764457724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.840851  283599 docker.go:254] overlay module found
	I0921 22:24:01.843051  283599 out.go:177] * Using the docker driver based on existing profile
	I0921 22:24:01.844347  283599 start.go:284] selected driver: docker
	I0921 22:24:01.844371  283599 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.844475  283599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:24:01.845300  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.940944  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.86716064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.941199  283599 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:24:01.941223  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:01.941231  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:01.941249  283599 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
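
The cni.go lines just below record the decision that picks pod networking for this profile: the docker driver combined with the containerd runtime gets kindnet recommended. A hedged sketch of that branch follows; the function name and return convention are illustrative, not minikube's actual API.

    package main

    import "fmt"

    // chooseCNI mirrors the driver/runtime decision logged by cni.go below.
    func chooseCNI(driver, runtime string) string {
        // A non-docker runtime on the docker driver cannot lean on the
        // built-in docker bridge for pods, so a real CNI is recommended.
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "" // empty: leave networking to the runtime default
    }

    func main() {
        fmt.Println(chooseCNI("docker", "containerd")) // kindnet
    }
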
	I0921 22:24:01.944240  283599 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:24:01.945596  283599 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:24:01.946905  283599 out.go:177] * Pulling base image ...
	I0921 22:24:01.948255  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:01.948306  283599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:24:01.948321  283599 cache.go:57] Caching tarball of preloaded images
	I0921 22:24:01.948361  283599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:24:01.948572  283599 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:24:01.948588  283599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:24:01.948702  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:01.976413  283599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:24:01.976445  283599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:24:01.976457  283599 cache.go:208] Successfully downloaded all kic artifacts
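
preload.go and cache.go above follow the same check-cache-then-skip pattern twice: the preloaded image tarball is already on disk and the kicbase digest is already in the local Docker daemon, so nothing is downloaded. A minimal sketch of that pattern, with a stand-in download callback and a shortened, illustrative path:

    package main

    import (
        "fmt"
        "os"
    )

    // ensurePreload downloads the tarball only when it is not cached,
    // matching the "Found local preload ... skipping download" flow above.
    func ensurePreload(tarball string, download func(string) error) error {
        if _, err := os.Stat(tarball); err == nil {
            fmt.Println("found local preload, skipping download:", tarball)
            return nil
        }
        return download(tarball)
    }

    func main() {
        _ = ensurePreload("/tmp/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4",
            func(p string) error { fmt.Println("downloading", p); return nil })
    }
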
	I0921 22:24:01.976502  283599 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:24:01.976622  283599 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 78.111µs
	I0921 22:24:01.976652  283599 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:24:01.976660  283599 fix.go:55] fixHost starting: 
	I0921 22:24:01.976899  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.002084  283599 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221118-10174: state=Stopped err=<nil>
	W0921 22:24:02.002122  283599 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:24:02.004632  283599 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	I0921 22:24:00.289698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.790230  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.006307  283599 cli_runner.go:164] Run: docker start default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.358108  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.385298  283599 kic.go:415] container "default-k8s-different-port-20220921221118-10174" state is running.
	I0921 22:24:02.385684  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.412757  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:02.412997  283599 machine.go:88] provisioning docker machine ...
	I0921 22:24:02.413031  283599 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:24:02.413108  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.438229  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:02.438400  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:02.438416  283599 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:24:02.439038  283599 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34230->127.0.0.1:49443: read: connection reset by peer
	I0921 22:24:05.584682  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
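The dial at 22:24:02 fails with a handshake reset because sshd inside the just-restarted container is not accepting connections yet; libmachine simply retries, and the hostname command succeeds about three seconds later. A rough sketch of waiting for the forwarded port to come up (the address is the one from this log; a real client would go on to complete the SSH handshake, not just the TCP connect):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForTCP dials until the port accepts connections, mirroring the
    // reset-then-success pair in the log above.
    func waitForTCP(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond) // e.g. connection reset: retry
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        fmt.Println(waitForTCP("127.0.0.1:49443", time.Minute))
    }
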
	I0921 22:24:05.584766  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.608825  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:05.609026  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:05.609059  283599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
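
Note that the shell fragment above is idempotent: it leaves /etc/hosts untouched when a line already ends with the new hostname, rewrites an existing 127.0.1.1 entry in place when one is present, and only appends a fresh 127.0.1.1 line as a last resort, so repeated restarts do not accumulate duplicate entries.
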
	I0921 22:24:05.739656  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:24:05.739694  283599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:24:05.739749  283599 ubuntu.go:177] setting up certificates
	I0921 22:24:05.739765  283599 provision.go:83] configureAuth start
	I0921 22:24:05.739824  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.764789  283599 provision.go:138] copyHostCerts
	I0921 22:24:05.764839  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:24:05.764846  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:24:05.764904  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:24:05.764993  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:24:05.765005  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:24:05.765028  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:24:05.765086  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:24:05.765095  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:24:05.765118  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:24:05.765169  283599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:24:05.914466  283599 provision.go:172] copyRemoteCerts
	I0921 22:24:05.914534  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:24:05.914564  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.939805  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.031315  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:24:06.048618  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:24:06.065530  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:24:06.083800  283599 provision.go:86] duration metric: configureAuth took 344.021748ms
	I0921 22:24:06.083828  283599 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:24:06.083988  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:06.083999  283599 machine.go:91] provisioned docker machine in 3.670987023s
	I0921 22:24:06.084006  283599 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:24:06.084012  283599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:24:06.084049  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:24:06.084088  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.108286  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.203139  283599 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:24:06.205811  283599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:24:06.205839  283599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:24:06.205852  283599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:24:06.205864  283599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:24:06.205880  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:24:06.205944  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:24:06.206037  283599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:24:06.206142  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:24:06.212569  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:06.229418  283599 start.go:303] post-start completed in 145.398445ms
	I0921 22:24:06.229483  283599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:24:06.229517  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.253305  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.340119  283599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:24:06.344050  283599 fix.go:57] fixHost completed within 4.367385464s
	I0921 22:24:06.344071  283599 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 4.367430848s
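
The two df probes above are the post-start disk checks on /var: with `df -h`, awk's NR==2 $5 is the Use% column of the mount's data row, and with `df -BG`, $4 is the gigabytes still available. These readings plausibly feed minikube's warning about low disk space on the docker root, though the log only shows the probes themselves.
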
	I0921 22:24:06.344157  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368445  283599 ssh_runner.go:195] Run: systemctl --version
	I0921 22:24:06.368501  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368505  283599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:24:06.368550  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.394444  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.396066  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.513229  283599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:24:06.524587  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:24:06.533746  283599 docker.go:188] disabling docker service ...
	I0921 22:24:06.533795  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:24:06.543075  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:24:06.551813  283599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:24:06.629483  283599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:24:01.818416  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:04.317912  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:05.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:07.290168  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:06.707030  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:24:06.717244  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:24:06.729638  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.737194  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.744928  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.752650  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.760419  283599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:24:06.766584  283599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:24:06.772903  283599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:24:06.844578  283599 ssh_runner.go:195] Run: sudo systemctl restart containerd
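
For reference, the mkdir/printf/tee pipeline logged at 22:24:06.717244 above just materialises a two-line crictl config pointing both CRI endpoints at the containerd socket, after which containerd is restarted. A minimal Go sketch of the same write, using the paths taken from the log; this is illustrative, not minikube's code:

package main

import (
	"log"
	"os"
)

const crictlConfig = `runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
`

func main() {
	// Mirrors `sudo mkdir -p /etc` from the logged command.
	if err := os.MkdirAll("/etc", 0o755); err != nil {
		log.Fatal(err)
	}
	// 0644 is what the `sudo tee /etc/crictl.yaml` pipeline leaves behind.
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}
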
	I0921 22:24:06.917291  283599 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:24:06.917353  283599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:24:06.921118  283599 start.go:471] Will wait 60s for crictl version
	I0921 22:24:06.921184  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:06.948257  283599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:24:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:24:06.817672  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.317278  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:11.317829  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.789185  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:12.289080  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:13.817410  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:15.817496  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:17.995620  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:18.018705  283599 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:24:18.018768  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.047337  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.078051  283599 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:24:14.289667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:16.789199  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:18.079491  283599 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:24:18.103308  283599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:24:18.106553  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
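
The bash one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal line, then append the current mapping. The same logic in plain Go, purely as a sketch (the tab-suffix match mirrors the `grep -v $'\thost.minikube.internal$'` filter):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for host.minikube.internal.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
}
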
	I0921 22:24:18.115993  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:18.116056  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.139896  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.139921  283599 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:24:18.139964  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.163323  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.163344  283599 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:24:18.163382  283599 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:24:18.186911  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:18.186935  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:18.186948  283599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:24:18.186961  283599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgr
oupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:24:18.187074  283599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:24:18.187152  283599 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
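
The kubeadm.yaml above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file, rendered from the options struct logged at kubeadm.go:156, and the kubelet systemd drop-in is generated the same way. A sketch of rendering one such fragment with text/template; the template and field names here are illustrative, not minikube's own:

package main

import (
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	// Values taken from the kubeadm options line in the log above.
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.85.2", 8444, "/run/containerd/containerd.sock",
		"default-k8s-different-port-20220921221118-10174"}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
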
	I0921 22:24:18.187196  283599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:24:18.194012  283599 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:24:18.194081  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:24:18.200606  283599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:24:18.212899  283599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:24:18.224754  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:24:18.236775  283599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:24:18.239439  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.248263  283599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:24:18.248377  283599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:24:18.248421  283599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:24:18.248485  283599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:24:18.248538  283599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:24:18.248575  283599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:24:18.248658  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:24:18.248689  283599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:24:18.248705  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:24:18.248729  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:24:18.248758  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:24:18.248780  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:24:18.248846  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:18.249439  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:24:18.265894  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:24:18.282128  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:24:18.298690  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:24:18.315323  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:24:18.331842  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:24:18.348196  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:24:18.364368  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:24:18.380401  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:24:18.396696  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:24:18.413238  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:24:18.429482  283599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:24:18.441654  283599 ssh_runner.go:195] Run: openssl version
	I0921 22:24:18.446184  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:24:18.453215  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456119  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456166  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.460690  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:24:18.467196  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:24:18.474449  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477401  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477445  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.481956  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:24:18.488418  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:24:18.495604  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498556  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498600  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.503245  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
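
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-CA-directory convention: TLS code looks a CA up in /etc/ssl/certs by "<subject-hash>.0", so each PEM needs a symlink named after its hash (b5213941, 51391683 and 3ec20f2e in this run). A sketch of one such step; the paths are from the log, the code is illustrative:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // emulate `ln -fs`: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
}
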
	I0921 22:24:18.509856  283599 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:18.509953  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:24:18.509985  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:18.533346  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:18.533375  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:18.533382  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:18.533388  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:18.533393  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:18.533402  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:18.533407  283599 cri.go:87] found id: ""
	I0921 22:24:18.533444  283599 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:24:18.545553  283599 cri.go:114] JSON = null
	W0921 22:24:18.545605  283599 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
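
The unpause warning above comes from cross-checking two listings: `crictl ps -a --quiet --label ...` returned six kube-system container IDs, while `runc --root /run/containerd/runc/k8s.io list` returned null, i.e. none of them is actually paused. A sketch of the first half, collecting the newline-separated IDs (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// --quiet makes crictl print one container ID per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
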
	I0921 22:24:18.545686  283599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:24:18.552635  283599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:24:18.552664  283599 kubeadm.go:627] restartCluster start
	I0921 22:24:18.552705  283599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:24:18.558944  283599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.559817  283599 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220921221118-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:18.560296  283599 kubeconfig.go:127] "default-k8s-different-port-20220921221118-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:24:18.561146  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:24:18.562655  283599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:24:18.568841  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.568884  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.576584  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.776932  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.777023  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.786228  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.977461  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.977542  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.986186  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.177398  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.177487  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.186159  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.377453  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.377534  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.385921  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.577206  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.577296  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.586370  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.777572  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.777676  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.786797  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.977103  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.977188  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.985822  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.177132  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.177234  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.185876  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.377187  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.377298  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.386086  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.577399  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.577488  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.586142  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.777447  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.777527  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.786547  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.976769  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.976865  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.985682  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.176870  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.176951  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.185811  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.377116  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.377184  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.385829  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.577109  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.577202  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.585911  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.585933  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.585979  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.593866  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.593893  283599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0921 22:24:21.593899  283599 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:24:21.593908  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:24:21.593964  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:21.618017  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:21.618041  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:21.618048  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:21.618058  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:21.618064  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:21.618072  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:21.618078  283599 cri.go:87] found id: ""
	I0921 22:24:21.618082  283599 cri.go:232] Stopping containers: [1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767]
	I0921 22:24:21.618118  283599 ssh_runner.go:195] Run: which crictl
	I0921 22:24:21.621347  283599 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767
	I0921 22:24:21.645531  283599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
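
Before reconfiguring, the restart path stops every kube-system container by ID and then stops the kubelet, so nothing respawns mid-rewrite. A sketch of that shutdown step; the IDs are the ones from the log, the code itself is illustrative:

package main

import (
	"log"
	"os/exec"
)

func main() {
	ids := []string{
		"1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5",
		"e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608",
		// ... the remaining IDs listed in the log above
	}
	// Equivalent of `sudo /usr/bin/crictl stop <id> <id> ...`.
	args := append([]string{"/usr/bin/crictl", "stop"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		log.Fatal(err)
	}
	if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
		log.Fatal(err)
	}
}
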
	I0921 22:24:21.655622  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:24:21.662408  283599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 21 22:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep 21 22:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:11 /etc/kubernetes/scheduler.conf
	
	I0921 22:24:21.662459  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0921 22:24:21.669029  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0921 22:24:21.675699  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.682316  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.682358  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.688501  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0921 22:24:17.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:19.818111  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:18.789856  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.289803  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.694928  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.696684  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:24:21.703329  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710109  283599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710132  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:21.757457  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.810948  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053458682s)
	I0921 22:24:22.810976  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.943243  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.995873  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
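
The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) rebuild the control plane piecewise against the regenerated kubeadm.yaml, each run with PATH pinned to the versioned binaries directory. A sketch of driving that sequence; this is not minikube's bootstrapper code:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prepend the pinned binaries dir, as the logged `env PATH=...` does.
		cmd.Env = append(os.Environ(),
			"PATH=/var/lib/minikube/binaries/v1.25.2:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", p, err, out)
		}
	}
}
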
	I0921 22:24:23.097694  283599 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:24:23.097766  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:23.608210  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.107567  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.187217  283599 api_server.go:71] duration metric: took 1.089523123s to wait for apiserver process to appear ...
	I0921 22:24:24.187296  283599 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:24:24.187323  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:24.187688  283599 api_server.go:256] stopped: https://192.168.85.2:8444/healthz: Get "https://192.168.85.2:8444/healthz": dial tcp 192.168.85.2:8444: connect: connection refused
	I0921 22:24:24.688449  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:22.317667  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:24.317872  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:23.789425  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:25.789684  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.790412  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.592182  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:24:27.592315  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:24:27.688579  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.694601  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:27.694667  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.187832  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.192979  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.193004  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.688623  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.695172  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.695285  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:29.187841  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:29.193157  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 200:
	ok
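
The healthz probes above show the normal startup progression: connection refused while the apiserver binds, a 403 for the anonymous probe, verbose 500 bodies while post-start hooks (the [-] lines) finish, then a bare 200 "ok". A bounded polling loop in that style; it skips TLS verification and authentication for brevity, which is an assumption of this sketch rather than how the real check is wired:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
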
	I0921 22:24:29.198775  283599 api_server.go:140] control plane version: v1.25.2
	I0921 22:24:29.198796  283599 api_server.go:130] duration metric: took 5.011488882s to wait for apiserver health ...
	I0921 22:24:29.198805  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:29.198812  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:29.201314  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:24:29.202798  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:24:29.206616  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:24:29.206636  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:24:29.221913  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:24:29.826767  283599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:24:29.834488  283599 system_pods.go:59] 9 kube-system pods found
	I0921 22:24:29.834517  283599 system_pods.go:61] "coredns-565d847f94-mrkjn" [7f364c47-74ce-4271-aab1-67bba320c586] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834528  283599 system_pods.go:61] "etcd-default-k8s-different-port-20220921221118-10174" [8f0f58a7-7eae-43db-840f-bde95464e94e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:24:29.834533  283599 system_pods.go:61] "kindnet-7wbpp" [3f16ae0b-2f66-4f1e-b234-74570472a7f8] Running
	I0921 22:24:29.834539  283599 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220921221118-10174" [3a935d6b-ca77-4bcb-ae19-0a2af77c12a1] Running
	I0921 22:24:29.834544  283599 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220921221118-10174" [d01ee91a-5587-48e9-a235-68a73d5fedef] Running
	I0921 22:24:29.834549  283599 system_pods.go:61] "kube-proxy-lzphc" [611dbd37-0771-41b2-b886-93f46d79f802] Running
	I0921 22:24:29.834554  283599 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220921221118-10174" [998713da-f133-43f7-9f11-c6110ad66c8d] Running
	I0921 22:24:29.834561  283599 system_pods.go:61] "metrics-server-5c8fd5cf8-sshzh" [5972fae5-09c2-4e2e-b609-ef85f72311e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834572  283599 system_pods.go:61] "storage-provisioner" [ca16dea1-fb3d-4cc1-b449-2236aefcc627] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834577  283599 system_pods.go:74] duration metric: took 7.786123ms to wait for pod list to return data ...
	I0921 22:24:29.834588  283599 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:24:29.837059  283599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:24:29.837085  283599 node_conditions.go:123] node cpu capacity is 8
	I0921 22:24:29.837096  283599 node_conditions.go:105] duration metric: took 2.500371ms to run NodePressure ...
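
Worth noting about the pod list above: coredns, metrics-server and storage-provisioner are all Pending with the same message because the lone node still carries the node.kubernetes.io/not-ready:NoSchedule taint, while kindnet and kube-proxy run anyway because DaemonSet pods are given a toleration for that taint automatically. A quick client-go sketch for inspecting node taints (kubeconfig path taken from the log; the client-go dependency versions are assumptions):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    // Print every taint on every node; an unready single node shows
    // node.kubernetes.io/not-ready with effect NoSchedule.
    for _, n := range nodes.Items {
        for _, t := range n.Spec.Taints {
            fmt.Printf("%s  %s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
        }
    }
}
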
	I0921 22:24:29.837121  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:30.025715  283599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029542  283599 kubeadm.go:778] kubelet initialised
	I0921 22:24:30.029565  283599 kubeadm.go:779] duration metric: took 3.826857ms waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029572  283599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:24:30.034316  283599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
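
The repeated pod_ready.go:102 lines that follow are this wait loop polling the pod object every couple of seconds and logging the full status struct each time the pod is still not Ready. The loop is roughly this shape (the interval and helper name are assumptions; only the 4m0s timeout and kubeconfig path come from the log):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition True,
// or the timeout expires (mirroring the pod_ready loop in the log).
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
    return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, nil // keep polling on transient errors
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true, nil
            }
        }
        return false, nil // not Ready yet; the real loop logs the status here
    })
}

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    fmt.Println(waitPodReady(cs, "kube-system", "coredns-565d847f94-mrkjn"))
}
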
	I0921 22:24:26.817684  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:29.317793  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:31.318001  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:30.289213  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.039865  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.040511  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:36.539322  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:33.817371  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:35.817456  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.789530  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:37.289284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:38.539700  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:41.040333  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:37.817967  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:40.318244  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:39.789967  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:42.289726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:43.539636  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:45.540134  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:42.817716  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.818139  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.789355  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:47.288847  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:48.040425  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:50.539475  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:47.317825  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.318211  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.289182  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:51.289938  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:52.539590  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:54.540310  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:51.817491  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:55.818165  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.789719  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:56.289013  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:57.040311  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:59.539775  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.318151  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:00.318254  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.289251  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:00.789124  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.789910  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.040207  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.540336  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.817283  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.817911  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:05.290121  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.789553  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.039774  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.039928  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:11.040136  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.817957  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:10.289528  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:12.789110  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:13.540022  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:16.040513  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:12.317490  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.818433  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.789413  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:16.789947  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:18.539457  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:21.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:17.317880  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.289330  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:21.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:23.539701  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.039677  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:22.317640  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:24.318075  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:23.789488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:25.789726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:28.539400  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:30.540154  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.817737  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.818270  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:31.318310  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.289323  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:30.789442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:32.789667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:33.039502  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.039801  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:33.318392  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.818247  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:34.790488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.288758  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.539221  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.539681  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:41.539999  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:38.317564  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:40.317641  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.289052  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:41.789424  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:44.040284  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:46.540320  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:42.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:45.317732  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:44.289331  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:46.789866  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:49.039837  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:51.540123  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:47.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:49.314620  276511 pod_ready.go:81] duration metric: took 4m0.002300536s waiting for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	E0921 22:25:49.314670  276511 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:25:49.314692  276511 pod_ready.go:38] duration metric: took 4m0.007078344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:25:49.314717  276511 kubeadm.go:631] restartCluster took 4m10.710033944s
	W0921 22:25:49.314858  276511 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:25:49.314887  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
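
This is the fallback path: after the 4m0s readiness wait expired, restartCluster is abandoned and the node is wiped with kubeadm reset so that a fresh kubeadm init can run. The Run: line above corresponds to this sketch (destructive; the PATH prefix and CRI socket are copied from the log):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Destructive: tears down the existing control plane state on the node.
    cmd := exec.Command("/bin/bash", "-c",
        `sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" `+
            `kubeadm reset --cri-socket /run/containerd/containerd.sock --force`)
    out, err := cmd.CombinedOutput()
    fmt.Printf("%s", out)
    if err != nil {
        fmt.Println("reset failed:", err)
    }
}
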
	I0921 22:25:49.289362  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:51.789574  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:54.040292  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:56.540637  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:52.154431  276511 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.839517184s)
	I0921 22:25:52.154487  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:25:52.163969  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:25:52.170969  276511 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:25:52.171027  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:25:52.177996  276511 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
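
The exit-status-2 ls above is a probe, not a failure: minikube checks whether kubeconfig-style files from the previous control plane survived the reset, and since all four are absent it skips the stale-config cleanup. The pattern is roughly (file list copied from the log):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // After a reset, none of these files should exist; exit status 2 from ls
    // is therefore the expected "nothing stale to clean up" signal.
    files := []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }
    args := append([]string{"ls", "-la"}, files...)
    if err := exec.Command("sudo", args...).Run(); err != nil {
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
            fmt.Println("no stale configs, skipping cleanup")
            return
        }
        fmt.Println("unexpected error:", err)
        return
    }
    fmt.Println("stale configs present, cleanup needed")
}
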
	I0921 22:25:52.178063  276511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:25:52.213969  276511 kubeadm.go:317] W0921 22:25:52.213140    3321 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:25:52.246713  276511 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:25:52.310910  276511 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
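Both [WARNING ...] lines are expected under minikube's docker driver: the node is a container, the host kernel's module tree is not available, and kubeadm's SystemVerification cannot load the "configs" module to read the kernel config (minikube already passes --ignore-preflight-errors=...,SystemVerification in the kubeadm init invocation above). A rough sketch of what that verifier attempts, for reference only:

    modprobe configs                                # exposes /proc/config.gz when the module exists
    ls /proc/config.gz /boot/config-$(uname -r)     # kernel-config locations the check can read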
	I0921 22:25:54.288796  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:56.289801  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:01.184243  276511 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:26:01.184314  276511 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:26:01.184416  276511 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:26:01.184507  276511 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:26:01.184592  276511 kubeadm.go:317] OS: Linux
	I0921 22:26:01.184673  276511 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:26:01.184737  276511 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:26:01.184793  276511 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:26:01.184856  276511 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:26:01.184921  276511 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:26:01.184985  276511 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:26:01.185046  276511 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:26:01.185099  276511 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:26:01.185157  276511 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:26:01.185254  276511 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:26:01.185380  276511 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:26:01.185526  276511 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:26:01.185623  276511 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:26:01.187463  276511 out.go:204]   - Generating certificates and keys ...
	I0921 22:26:01.187540  276511 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:26:01.187594  276511 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:26:01.187659  276511 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:26:01.187785  276511 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:26:01.187900  276511 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:26:01.187958  276511 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:26:01.188014  276511 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:26:01.188086  276511 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:26:01.188221  276511 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:26:01.188336  276511 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:26:01.188409  276511 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:26:01.188488  276511 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:26:01.188556  276511 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:26:01.188636  276511 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:26:01.188731  276511 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:26:01.188817  276511 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:26:01.188953  276511 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:26:01.189087  276511 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:26:01.189191  276511 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:26:01.189310  276511 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:26:01.191284  276511 out.go:204]   - Booting up control plane ...
	I0921 22:26:01.191385  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:26:01.191486  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:26:01.191561  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:26:01.191748  276511 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:26:01.191985  276511 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:26:01.192105  276511 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503275 seconds
	I0921 22:26:01.192289  276511 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:26:01.192460  276511 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:26:01.192545  276511 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:26:01.192839  276511 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:26:01.192906  276511 kubeadm.go:317] [bootstrap-token] Using token: 9ldpwz.b05pw96cyce3l1nr
	I0921 22:26:01.194593  276511 out.go:204]   - Configuring RBAC rules ...
	I0921 22:26:01.194724  276511 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:26:01.194852  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:26:01.195058  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:26:01.195234  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:26:01.195387  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:26:01.195500  276511 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:26:01.195644  276511 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:26:01.195703  276511 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:26:01.195765  276511 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:26:01.195777  276511 kubeadm.go:317] 
	I0921 22:26:01.195861  276511 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:26:01.195872  276511 kubeadm.go:317] 
	I0921 22:26:01.195980  276511 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:26:01.196004  276511 kubeadm.go:317] 
	I0921 22:26:01.196036  276511 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:26:01.196117  276511 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:26:01.196194  276511 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:26:01.196207  276511 kubeadm.go:317] 
	I0921 22:26:01.196286  276511 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:26:01.196303  276511 kubeadm.go:317] 
	I0921 22:26:01.196379  276511 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:26:01.196404  276511 kubeadm.go:317] 
	I0921 22:26:01.196485  276511 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:26:01.196595  276511 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:26:01.196694  276511 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:26:01.196706  276511 kubeadm.go:317] 
	I0921 22:26:01.196820  276511 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:26:01.196920  276511 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:26:01.196931  276511 kubeadm.go:317] 
	I0921 22:26:01.197032  276511 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197181  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:26:01.197220  276511 kubeadm.go:317] 	--control-plane 
	I0921 22:26:01.197231  276511 kubeadm.go:317] 
	I0921 22:26:01.197362  276511 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:26:01.197381  276511 kubeadm.go:317] 
	I0921 22:26:01.197495  276511 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197628  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
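The join commands above embed the CA certificate hash. Should the hash be lost, it can be recomputed from the cluster CA using the standard kubeadm recipe; note that this minikube node keeps its certificates under /var/lib/minikube/certs (see the [certs] line above) rather than the stock /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'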
	I0921 22:26:01.197660  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:26:01.197674  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:26:01.199797  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:25:59.039749  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.040507  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.201405  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:26:01.205181  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:26:01.205199  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:26:01.218971  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
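With the docker driver and containerd runtime, minikube selects kindnet as the CNI (cni.go:162 above) and applies its manifest with the pinned kubectl binary. Once applied, the daemonset pods could be checked the same way (the app=kindnet label is an assumption based on minikube's bundled kindnet manifest, not shown in this log):

    sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet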
	I0921 22:25:58.789397  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:00.789911  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:03.540344  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:06.039881  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:02.006490  276511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:26:02.006560  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.006575  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.013858  276511 ops.go:34] apiserver oom_adj: -16
	I0921 22:26:02.099832  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.694112  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.194089  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.693535  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.193854  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.693713  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.194101  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.694288  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.193619  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.693501  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.289345  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:05.789183  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:08.040230  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:10.539463  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:07.193590  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.693901  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.194072  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.694197  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.193914  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.693488  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.194416  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.693496  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.194435  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.694097  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.289258  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:10.789536  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.790035  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.194461  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.694279  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.193818  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.693711  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.758985  276511 kubeadm.go:1067] duration metric: took 11.752476269s to wait for elevateKubeSystemPrivileges.
	I0921 22:26:13.759013  276511 kubeadm.go:398] StartCluster complete in 4m35.198807914s
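The burst of `kubectl get sa default` runs at roughly 500 ms intervals above is the elevateKubeSystemPrivileges wait summarized in kubeadm.go:1067: the minikube-rbac clusterrolebinding created a few lines earlier only takes effect once the default ServiceAccount exists, so the harness polls until the service account controller has created it. The loop amounts to:

    until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done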
	I0921 22:26:13.759030  276511 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:13.759144  276511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:26:13.760661  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:14.276964  276511 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
	I0921 22:26:14.277021  276511 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:26:14.277060  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:26:14.279846  276511 out.go:177] * Verifying Kubernetes components...
	I0921 22:26:14.277154  276511 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:26:14.277306  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:26:14.281313  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:26:14.281349  276511 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281359  276511 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281373  276511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:26:14.281387  276511 addons.go:65] Setting metrics-server=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281397  276511 addons.go:65] Setting dashboard=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281436  276511 addons.go:153] Setting addon dashboard=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281450  276511 addons.go:162] addon dashboard should already be in state true
	I0921 22:26:14.281497  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281400  276511 addons.go:153] Setting addon metrics-server=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281576  276511 addons.go:162] addon metrics-server should already be in state true
	I0921 22:26:14.281377  276511 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	I0921 22:26:14.281640  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	W0921 22:26:14.281653  276511 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:26:14.281684  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281727  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282004  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282138  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282139  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.321366  276511 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.323218  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:26:14.323243  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:26:14.323321  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.323433  276511 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.323452  276511 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:26:14.323478  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.323995  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.331074  276511 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:26:14.333243  276511 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.335670  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:26:14.335699  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:26:14.335828  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.338700  276511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:26:12.540251  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:15.040305  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:14.339971  276511 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.339996  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:26:14.340067  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.357088  276511 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.357118  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:26:14.357179  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.363845  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.373248  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.374001  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.403584  276511 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:26:14.403673  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:26:14.403710  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
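The long sed pipeline above rewrites the coredns ConfigMap in place so cluster workloads can resolve host.minikube.internal; the confirmation appears below at start.go:810. The stanza it injects into the Corefile comes out as:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }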
	I0921 22:26:14.597706  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:26:14.597740  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:26:14.598185  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:26:14.598208  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:26:14.678717  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.691157  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:26:14.691190  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:26:14.776824  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.780103  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:26:14.780131  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:26:14.796772  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.796802  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:26:14.877240  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:26:14.877270  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:26:14.886529  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.982072  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:26:14.982106  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:26:15.083042  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:26:15.083073  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:26:15.185025  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:26:15.185058  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:26:15.288358  276511 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:26:15.295798  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:26:15.295830  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:26:15.390667  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:26:15.390693  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:26:15.415462  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.415496  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:26:15.492343  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.887638  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.208874575s)
	I0921 22:26:15.887703  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110843194s)
	I0921 22:26:15.982100  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095511944s)
	I0921 22:26:15.982142  276511 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220921220832-10174"
	I0921 22:26:16.410487  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:16.706261  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213866962s)
	I0921 22:26:16.708800  276511 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:26:16.709899  276511 addons.go:414] enableAddons completed in 2.432760887s
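After enableAddons completes, the enabled set can be cross-checked per profile with minikube itself:

    out/minikube-linux-amd64 addons list -p no-preload-20220921220832-10174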
	I0921 22:26:15.290491  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.789818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.539620  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:20.039549  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:18.911099  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:21.409684  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:20.289226  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292776  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292799  265259 node_ready.go:38] duration metric: took 4m0.017444735s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:26:22.294631  265259 out.go:177] 
	W0921 22:26:22.296115  265259 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:26:22.296143  265259 out.go:239] * 
	W0921 22:26:22.296927  265259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:26:22.298511  265259 out.go:177] 
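This is the embed-certs test (pid 265259) giving up: node_ready.go:38 above shows the node-ready poll ran its full 4m0s without the node ever leaving Ready=False, so start exits with GUEST_START inside the overall 6m0s budget. When reproducing, the node's side of the story is usually in its conditions and the kubelet/CNI logs, e.g. (profile name taken from this log):

    out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs --file=logs.txt
    kubectl describe node embed-certs-20220921220439-10174 | grep -A8 Conditions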
	I0921 22:26:22.539641  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:25.039622  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:23.410505  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:25.909606  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:27.539385  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:29.539878  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:31.540249  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:27.910578  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:30.410429  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:33.540339  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:35.541025  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:32.910296  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:34.911081  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:38.039663  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:40.539522  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:37.410360  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:39.410436  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:42.540000  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:45.040231  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:41.909862  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:43.910310  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:46.409644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:47.540283  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:50.039510  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:48.410566  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:50.410732  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:52.039949  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:54.540144  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:52.910395  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:54.910495  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:57.039966  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:59.040209  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.539473  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:57.409907  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:59.410288  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:03.540044  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.910153  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	[... the two messages above repeat verbatim at roughly 2–2.5s intervals, differing only in timestamp: pod_ready.go:102 (pid 283599) through I0921 22:28:29.540282, and node_ready.go:58 (pid 276511) through I0921 22:28:24.909931 ...]
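For context on these two polls: pod_ready.go is waiting for the CoreDNS pod's Ready condition to turn True, but the pod is stuck Pending because the scheduler refuses to place it on a node that still carries the node.kubernetes.io/not-ready taint (which CoreDNS, unlike DaemonSet pods, does not tolerate); node_ready.go is the same style of poll against the node's Ready condition, so the two fail together. A minimal client-go sketch of both checks (illustrative only, not minikube's actual implementation; kubeconfig path and object names are taken from the log for demonstration):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the Node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	if pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-565d847f94-mrkjn", metav1.GetOptions{}); err == nil {
		fmt.Println("pod ready:", podReady(pod))
	}
	if node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-20220921220832-10174", metav1.GetOptions{}); err == nil {
		fmt.Println("node ready:", nodeReady(node))
	}
}
```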
	I0921 22:28:30.037464  283599 pod_ready.go:81] duration metric: took 4m0.003103432s waiting for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
	E0921 22:28:30.037491  283599 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:28:30.037512  283599 pod_ready.go:38] duration metric: took 4m0.007931264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:28:30.037542  283599 kubeadm.go:631] restartCluster took 4m11.484871611s
	W0921 22:28:30.037694  283599 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:28:30.037731  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:28:26.910183  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:28.910722  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:31.410255  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:32.836415  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.798662315s)
	I0921 22:28:32.836470  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:32.846281  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:28:32.853286  283599 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:28:32.853347  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:28:32.860321  283599 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
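The exit status 2 here is expected rather than an error: the kubeadm reset a few lines earlier deleted all four kubeconfig files, so the stale-config check finds nothing to clean up and minikube proceeds straight to a fresh kubeadm init. A rough sketch of that existence probe (illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The four files the "sudo ls -la" probe above looks for.
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	stale := false
	for _, f := range files {
		if _, err := os.Stat(f); err == nil {
			stale = true // a leftover config file is present
		}
	}
	// All four are missing after "kubeadm reset", so cleanup is skipped.
	fmt.Println("stale config present:", stale)
}
```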
	I0921 22:28:32.860372  283599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:28:32.899444  283599 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:28:32.899530  283599 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:28:32.927597  283599 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:28:32.927684  283599 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:28:32.927762  283599 kubeadm.go:317] OS: Linux
	I0921 22:28:32.927817  283599 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:28:32.927857  283599 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:28:32.927895  283599 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:28:32.927957  283599 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:28:32.928004  283599 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:28:32.928045  283599 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:28:32.928083  283599 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:28:32.928121  283599 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:28:32.928158  283599 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:28:32.994267  283599 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:28:32.994393  283599 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:28:32.994471  283599 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:28:33.113433  283599 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:28:33.118993  283599 out.go:204]   - Generating certificates and keys ...
	I0921 22:28:33.119145  283599 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:28:33.119247  283599 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:28:33.119310  283599 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:28:33.119362  283599 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:28:33.119432  283599 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:28:33.119501  283599 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:28:33.119554  283599 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:28:33.119605  283599 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:28:33.119666  283599 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:28:33.119759  283599 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:28:33.119797  283599 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:28:33.119873  283599 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:28:33.240892  283599 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:28:33.319256  283599 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:28:33.514290  283599 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:28:33.579294  283599 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:28:33.591185  283599 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:28:33.591951  283599 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:28:33.592077  283599 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:28:33.671909  283599 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:28:33.674209  283599 out.go:204]   - Booting up control plane ...
	I0921 22:28:33.674356  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:28:33.674478  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:28:33.675328  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:28:33.677339  283599 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:28:33.679453  283599 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:28:33.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:35.410708  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.182528  283599 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502979 seconds
	I0921 22:28:40.182719  283599 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:28:40.191775  283599 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:28:40.708308  283599 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:28:40.708506  283599 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:28:41.216221  283599 kubeadm.go:317] [bootstrap-token] Using token: 7zktge.i7kw817sdpmpqput
	I0921 22:28:41.217917  283599 out.go:204]   - Configuring RBAC rules ...
	I0921 22:28:41.218062  283599 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:28:41.221048  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:28:41.225663  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:28:41.227873  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:28:41.229840  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:28:41.231693  283599 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:28:41.238509  283599 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:28:41.448788  283599 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:28:41.684596  283599 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:28:41.686021  283599 kubeadm.go:317] 
	I0921 22:28:41.686112  283599 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:28:41.686121  283599 kubeadm.go:317] 
	I0921 22:28:41.686213  283599 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:28:41.686221  283599 kubeadm.go:317] 
	I0921 22:28:41.686253  283599 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:28:41.687200  283599 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:28:41.687275  283599 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:28:41.687282  283599 kubeadm.go:317] 
	I0921 22:28:41.687347  283599 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:28:41.687361  283599 kubeadm.go:317] 
	I0921 22:28:41.687420  283599 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:28:41.687443  283599 kubeadm.go:317] 
	I0921 22:28:41.687516  283599 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:28:41.687626  283599 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:28:41.687754  283599 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:28:41.687768  283599 kubeadm.go:317] 
	I0921 22:28:41.687856  283599 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:28:41.687945  283599 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:28:41.687952  283599 kubeadm.go:317] 
	I0921 22:28:41.688054  283599 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688176  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:28:41.688202  283599 kubeadm.go:317] 	--control-plane 
	I0921 22:28:41.688207  283599 kubeadm.go:317] 
	I0921 22:28:41.688304  283599 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:28:41.688309  283599 kubeadm.go:317] 
	I0921 22:28:41.688403  283599 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688525  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
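The --discovery-token-ca-cert-hash printed in the join commands is kubeadm's standard CA-pinning value: a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which a joining node checks before trusting the control plane. A small sketch of computing it (the ca.crt filename under the certificateDir logged above is an assumption based on kubeadm's usual layout):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM data in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// SHA-256 over the DER-encoded SubjectPublicKeyInfo, printed in
	// kubeadm's "sha256:<hex>" form.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```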
	I0921 22:28:41.691473  283599 kubeadm.go:317] W0921 22:28:32.894416    3309 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:28:41.691806  283599 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:28:41.691944  283599 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
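Of the three warnings kubeadm surfaces here, the SystemVerification one is already being ignored deliberately (see the "ignoring SystemVerification for kubeadm because of docker driver" line above). The first warning is about the criSocket value /run/containerd/containerd.sock lacking a URL scheme: kubeadm now wants unix:///run/containerd/containerd.sock and prepends the scheme itself. The normalization it describes amounts to this sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeCRISocket mirrors the behavior kubeadm warns about:
// CRI endpoints without a URL scheme get "unix://" prepended.
func normalizeCRISocket(endpoint string) string {
	if !strings.Contains(endpoint, "://") {
		return "unix://" + endpoint
	}
	return endpoint
}

func main() {
	fmt.Println(normalizeCRISocket("/run/containerd/containerd.sock"))
	// Output: unix:///run/containerd/containerd.sock
}
```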
	I0921 22:28:41.691973  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:28:41.691983  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:28:41.694185  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:28:37.910661  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.410644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:41.695783  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:28:41.699760  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:28:41.699784  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:28:41.776183  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:28:42.446104  283599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:28:42.446180  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.446216  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.524814  283599 ops.go:34] apiserver oom_adj: -16
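The oom_adj probe a few lines up reads /proc/&lt;apiserver pid&gt;/oom_adj; the reported -16 means the kernel's OOM killer strongly prefers to kill other processes before the API server under memory pressure (oom_adj is the legacy interface, superseded by oom_score_adj). Reading the same value in Go (pid hard-coded as a stand-in for the pgrep in the logged command):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	pid := 1234 // stand-in for $(pgrep kube-apiserver)
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		panic(err)
	}
	// Negative values make the OOM killer less likely to pick this process.
	fmt.Println("oom_adj:", strings.TrimSpace(string(b)))
}
```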
	I0921 22:28:42.524918  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.099884  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.600017  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.099303  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.599933  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.100173  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.599961  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.099843  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.599840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.910093  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:44.910463  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:47.099465  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.599512  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.099998  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.599598  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.099840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.599433  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.099931  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.599355  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.099363  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.599865  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.410019  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:49.410428  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:51.410461  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:52.099400  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:52.600056  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.100255  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.599772  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.668975  283599 kubeadm.go:1067] duration metric: took 11.222848116s to wait for elevateKubeSystemPrivileges.
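The burst of "kubectl get sa default" calls above is a plain poll: service accounts are created asynchronously by the controller-manager after init, so minikube waits for the "default" ServiceAccount to appear (11.2s in this run) before the cluster-admin binding created earlier can take effect for addon workloads. A hand-rolled version of that loop in client-go (illustrative sketch, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms until the "default" ServiceAccount exists.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
```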
	I0921 22:28:53.669016  283599 kubeadm.go:398] StartCluster complete in 4m35.159165946s
	I0921 22:28:53.669039  283599 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:53.669157  283599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:28:53.670820  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:54.187769  283599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
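"rescaled to 1" above is minikube trimming the CoreDNS deployment, which kubeadm creates with two replicas by default, down to a single replica, enough for a one-node cluster. Done directly against the API, the scale change looks like this (client-go sketch, illustrative only):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Fetch the current scale of kube-system/coredns and set it to 1.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```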
	I0921 22:28:54.187839  283599 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:28:54.187870  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:28:54.187894  283599 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:28:54.190631  283599 out.go:177] * Verifying Kubernetes components...
	I0921 22:28:54.187957  283599 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187964  283599 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187970  283599 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188002  283599 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188076  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:28:54.192035  283599 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192079  283599 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:28:54.192091  283599 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192114  283599 addons.go:162] addon dashboard should already be in state true
	I0921 22:28:54.192162  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192210  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:54.192299  283599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.192580  283599 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192616  283599 addons.go:162] addon metrics-server should already be in state true
	I0921 22:28:54.192633  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192666  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192163  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192666  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.193362  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.193439  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.234974  283599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:28:54.236667  283599 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.236692  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:28:54.236745  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.240000  283599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:28:54.239390  283599 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.241874  283599 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.244335  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:28:54.244363  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	W0921 22:28:54.241874  283599 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:28:54.244424  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.244454  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.244956  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.246658  283599 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.248082  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:28:54.248109  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:28:54.248165  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.272909  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.273873  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.277163  283599 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.277186  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:28:54.277236  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.290041  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.318706  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.398932  283599 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:28:54.399014  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
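The long sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts plugin stanza ahead of the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the host-side gateway 192.168.85.1 inside the cluster (the "host record injected" line below confirms it). A reconstruction of that edit in Go; the surrounding Corefile shown is abridged for illustration, only the inserted stanza is taken from the sed expression:

```go
package main

import (
	"fmt"
	"strings"
)

// The stanza the sed pipeline inserts, reconstructed verbatim.
const stanza = `        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
`

func main() {
	// Abridged Corefile stand-in for illustration.
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}
`
	// Insert the hosts{} stanza before the forward directive,
	// mirroring the sed "/^        forward .../i" address.
	out := strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		stanza+"        forward . /etc/resolv.conf", 1)
	fmt.Println(out)
}
```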
	I0921 22:28:54.496523  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.498431  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.499591  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:28:54.499650  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:28:54.501640  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:28:54.501663  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:28:54.594519  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:28:54.594561  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:28:54.596768  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:28:54.596847  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:28:54.690036  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.690071  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:28:54.700119  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:28:54.700197  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:28:54.876320  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.883544  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:28:54.883571  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:28:54.977006  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:28:54.977040  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:28:55.079240  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:28:55.079273  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:28:55.176309  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:28:55.176344  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:28:55.276282  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:28:55.276317  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:28:55.379016  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.379044  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:28:55.386242  283599 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0921 22:28:55.399129  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.595061  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098450009s)
	I0921 22:28:55.786581  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288109437s)
	I0921 22:28:56.081753  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205376891s)
	I0921 22:28:56.081804  283599 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:56.387178  283599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0921 22:28:56.388690  283599 addons.go:414] enableAddons completed in 2.200797183s
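	
	(The CoreDNS rewrite at 22:28:54 above uses sed to inject a hosts stanza mapping host.minikube.internal to 192.168.85.1, which the "host record injected into CoreDNS" line then confirms. A minimal verification sketch, assuming kubectl access to this cluster; these commands are hypothetical and were not run as part of the captured test:)
	
	  # Print the live Corefile and show the injected hosts block.
	  kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	    get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
	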
	I0921 22:28:56.404853  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:53.909716  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:55.910611  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:58.405031  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:00.405582  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:58.409630  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:00.410447  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:02.905572  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:05.405473  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:02.910338  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:05.410066  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:07.904364  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:09.905589  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:07.910279  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:10.410127  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:12.405034  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:14.905741  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:12.910452  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:15.410553  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:17.404952  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:19.405175  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:21.405392  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:17.910479  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:20.410559  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:23.405620  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:25.905592  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:22.909898  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:24.910567  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:27.905775  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:30.405483  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:27.410039  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:29.410131  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:31.410291  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:32.904863  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:35.404709  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:33.910459  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:36.410445  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:37.905690  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:40.405229  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:38.910532  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:41.409671  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:42.905360  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:44.905907  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:43.410422  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:45.910511  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:47.404631  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:49.405402  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:48.409951  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:50.410363  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:51.904997  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:53.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:56.405228  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:52.411261  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:54.910318  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:58.405705  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:00.905348  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:57.409683  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:59.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.404779  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:05.404833  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:01.909994  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.910230  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:06.410036  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:07.405804  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:09.904912  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:08.909550  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:10.910475  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:13.409889  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:14.413229  276511 node_ready.go:38] duration metric: took 4m0.009606009s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:30:14.416209  276511 out.go:177] 
	W0921 22:30:14.417896  276511 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:30:14.417916  276511 out.go:239] * 
	W0921 22:30:14.418711  276511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:30:14.420798  276511 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	fab8999ce76fe       d921cee849482       About a minute ago   Running             kindnet-cni               1                   c154261f9ef9c
	f637568210c7a       d921cee849482       4 minutes ago        Exited              kindnet-cni               0                   c154261f9ef9c
	8c26b3ec700f1       1c7d8c51823b5       4 minutes ago        Running             kube-proxy                0                   b9860d4aa1834
	a520a5b3d71d5       ca0ea1ee3cfd3       4 minutes ago        Running             kube-scheduler            2                   5d1b185924c31
	54979eccafeb5       a8a176a5d5d69       4 minutes ago        Running             etcd                      2                   3aeccdb1ccfbb
	fc632c61d18ce       dbfceb93c69b6       4 minutes ago        Running             kube-controller-manager   2                   6c963e60ffdaf
	fbe07ea9b6cd1       97801f8394908       4 minutes ago        Running             kube-apiserver            2                   0e8d68b117ca3
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:21:22 UTC, end at Wed 2022-09-21 22:30:15 UTC. --
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.226079193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.226094064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.226407603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9860d4aa1834c48a96731f3c427eab857da87a74a08d083791b861c2bf09e91 pid=4248 runtime=io.containerd.runc.v2
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.304520314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52x7l,Uid:02722e58-72c9-4da6-b0a4-196c51acc99c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9860d4aa1834c48a96731f3c427eab857da87a74a08d083791b861c2bf09e91\""
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.308206912Z" level=info msg="CreateContainer within sandbox \"b9860d4aa1834c48a96731f3c427eab857da87a74a08d083791b861c2bf09e91\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.332255656Z" level=info msg="CreateContainer within sandbox \"b9860d4aa1834c48a96731f3c427eab857da87a74a08d083791b861c2bf09e91\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8c26b3ec700f1f2e31061bc3b5571524698489a128551f988929d8f40c0cd123\""
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.333135041Z" level=info msg="StartContainer for \"8c26b3ec700f1f2e31061bc3b5571524698489a128551f988929d8f40c0cd123\""
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.489023483Z" level=info msg="StartContainer for \"8c26b3ec700f1f2e31061bc3b5571524698489a128551f988929d8f40c0cd123\" returns successfully"
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.501072647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-ww9rl,Uid:68c0d807-f3cb-4a87-8603-c99649d89553,Namespace:kube-system,Attempt:0,} returns sandbox id \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\""
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.504236171Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.593415867Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"f637568210c7a2384ec1a6884bfbe8891208a46727d367fc9ddbb657dc488b1d\""
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.594546928Z" level=info msg="StartContainer for \"f637568210c7a2384ec1a6884bfbe8891208a46727d367fc9ddbb657dc488b1d\""
	Sep 21 22:26:14 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:26:14.980043459Z" level=info msg="StartContainer for \"f637568210c7a2384ec1a6884bfbe8891208a46727d367fc9ddbb657dc488b1d\" returns successfully"
	Sep 21 22:27:01 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:27:01.003046999Z" level=error msg="ContainerStatus for \"d605c694eb565b146de41bf9aeb0e0d8611a29c6b87a941d9f2be2cf9633485e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d605c694eb565b146de41bf9aeb0e0d8611a29c6b87a941d9f2be2cf9633485e\": not found"
	Sep 21 22:27:01 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:27:01.003578472Z" level=error msg="ContainerStatus for \"f932754d512b70ba40cb017fa2d0d66011913d9205bb8a7e0854a2a4dd07c610\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f932754d512b70ba40cb017fa2d0d66011913d9205bb8a7e0854a2a4dd07c610\": not found"
	Sep 21 22:27:01 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:27:01.004043544Z" level=error msg="ContainerStatus for \"3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3eefbcb898b092a06a25d126ffaed982346f2b3691953b1cfaddc748fee80843\": not found"
	Sep 21 22:27:01 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:27:01.004495804Z" level=error msg="ContainerStatus for \"d5f7b6ec947978c8f5d1591ffc19b813c1e04902996fcc519f0bb5a2a84a948d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5f7b6ec947978c8f5d1591ffc19b813c1e04902996fcc519f0bb5a2a84a948d\": not found"
	Sep 21 22:28:55 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:55.532674654Z" level=info msg="shim disconnected" id=f637568210c7a2384ec1a6884bfbe8891208a46727d367fc9ddbb657dc488b1d
	Sep 21 22:28:55 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:55.532733752Z" level=warning msg="cleaning up after shim disconnected" id=f637568210c7a2384ec1a6884bfbe8891208a46727d367fc9ddbb657dc488b1d namespace=k8s.io
	Sep 21 22:28:55 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:55.532745287Z" level=info msg="cleaning up dead shim"
	Sep 21 22:28:55 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:55.542194269Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:28:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4775 runtime=io.containerd.runc.v2\n"
	Sep 21 22:28:56 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:56.466077435Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 21 22:28:56 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:56.479357814Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"fab8999ce76feeeff063c9d2ac345193f7ab2fc3e8c6e8111eb98766d74ff485\""
	Sep 21 22:28:56 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:56.479968101Z" level=info msg="StartContainer for \"fab8999ce76feeeff063c9d2ac345193f7ab2fc3e8c6e8111eb98766d74ff485\""
	Sep 21 22:28:56 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:28:56.680449700Z" level=info msg="StartContainer for \"fab8999ce76feeeff063c9d2ac345193f7ab2fc3e8c6e8111eb98766d74ff485\" returns successfully"
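	
	(The containerd log above shows the first kindnet-cni container, f637568210c7a..., disconnecting at 22:28:55 and a replacement being created as attempt 1, which lines up with the kubelet's repeated "cni plugin not initialized" errors later in this report. A minimal diagnostic sketch, assuming shell access to the node; not part of the captured run:)
	
	  # List all kindnet-cni containers, including the exited first attempt.
	  sudo crictl ps -a --name kindnet-cni
	  # Dump logs from the exited container (crictl accepts an ID prefix)
	  # to see why the first attempt crashed.
	  sudo crictl logs f637568210c7a
	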
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220921220832-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220921220832-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=no-preload-20220921220832-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:25:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220921220832-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:26:11 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:26:11 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:26:11 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:26:11 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220921220832-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                44c6c62a-5061-4f07-a2f0-9d563da1b73e
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220921220832-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-ww9rl                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-no-preload-20220921220832-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-no-preload-20220921220832-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-52x7l                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-no-preload-20220921220832-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  NodeHasSufficientMemory  4m21s (x4 over 4m21s)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x4 over 4m21s)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x4 over 4m21s)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                   node-controller  Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller
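	
	(The Ready condition above is False with reason KubeletNotReady because the CNI plugin never initialized, which is why the node_ready polls earlier in the log never observe "Ready":"True". A minimal sketch for extracting just that condition message, assuming kubectl access; hypothetical, not run here:)
	
	  kubectl get node no-preload-20220921220832-10174 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
	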
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [54979eccafeb56940caff5e4877cc59e8d00548c625c65f2549da307ec829506] <==
	* {"level":"info","ts":"2022-09-21T22:25:55.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2022-09-21T22:25:55.085Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.577Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-20220921220832-10174 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:25:55.580Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:25:55.581Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.94.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:30:15 up  1:12,  0 users,  load average: 0.38, 0.64, 1.29
	Linux no-preload-20220921220832-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [fbe07ea9b6cd1b2387645030cac1d4cc68659f594af25721d8138cd4ce88e0cc] <==
	* I0921 22:26:13.878076       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0921 22:26:15.913241       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.96.211.162]
	I0921 22:26:16.688965       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.106.203.245]
	I0921 22:26:16.700971       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.103.246.5]
	W0921 22:26:16.778317       1 handler_proxy.go:105] no RequestInfo found in the context
	W0921 22:26:16.778322       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:26:16.778394       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:26:16.778402       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0921 22:26:16.778413       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:26:16.779528       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:27:16.779474       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:27:16.779519       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:27:16.779525       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:27:16.779641       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:27:16.779754       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:27:16.781587       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:29:16.780649       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:29:16.780696       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:29:16.780702       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:29:16.781720       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:29:16.781783       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:29:16.781795       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [fc632c61d18cec99e31615712601080a0a8d73d2a421dd3fb061f64331bf7d7c] <==
	* E0921 22:26:16.579307       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:26:16.579317       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0921 22:26:16.579341       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0921 22:26:16.582864       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:26:16.582877       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0921 22:26:16.584910       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:26:16.584938       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0921 22:26:16.593955       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-gh8ld"
	I0921 22:26:16.602939       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-lsnrl"
	E0921 22:26:43.439692       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:26:43.796517       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:27:13.446266       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:27:13.813580       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:27:43.452567       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:27:43.823177       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:28:13.459108       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:28:13.834868       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:28:43.465143       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:28:43.845530       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:29:13.471579       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:29:13.855915       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:29:43.478831       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:29:43.866998       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:30:13.486886       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:30:13.877040       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [8c26b3ec700f1f2e31061bc3b5571524698489a128551f988929d8f40c0cd123] <==
	* I0921 22:26:14.785017       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0921 22:26:14.785128       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0921 22:26:14.785168       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:26:14.888322       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:26:14.888381       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:26:14.888396       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:26:14.888420       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:26:14.888469       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:26:14.888613       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:26:14.888846       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:26:14.888858       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:26:14.889551       1 config.go:444] "Starting node config controller"
	I0921 22:26:14.889563       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:26:14.889861       1 config.go:317] "Starting service config controller"
	I0921 22:26:14.889874       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:26:14.889897       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:26:14.889901       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:26:14.989950       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:26:14.990008       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:26:14.990012       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a520a5b3d71d5436376cb6ec2cc229690250107ed3a13462565666b39cd14a9f] <==
	* W0921 22:25:58.301074       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0921 22:25:58.301095       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:25:58.301162       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:25:58.301187       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:25:58.301245       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:25:58.301256       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:25:58.301266       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:25:58.301269       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:25:58.301326       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0921 22:25:58.301348       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:25:58.301355       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:25:58.301369       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:25:58.301392       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:25:58.301411       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:25:58.301416       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:25:58.301429       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:25:58.301487       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:25:58.301507       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0921 22:25:59.323662       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:25:59.323768       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:25:59.357767       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:25:59.357803       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:25:59.383080       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:25:59.383124       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0921 22:25:59.894726       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:21:22 UTC, end at Wed 2022-09-21 22:30:15 UTC. --
	Sep 21 22:28:16 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:16.317343    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:21 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:21.318054    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:26 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:26.319686    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:31 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:31.321246    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:36 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:36.322830    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:41 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:41.324306    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:46 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:46.325884    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:51 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:51.327497    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:56 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:28:56.328869    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:28:56 no-preload-20220921220832-10174 kubelet[3866]: I0921 22:28:56.463641    3866 scope.go:115] "RemoveContainer" containerID="f637568210c7a2384ec1a6884bfbe8891208a46727d367fc9ddbb657dc488b1d"
	Sep 21 22:29:01 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:01.330198    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:06 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:06.331216    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:11 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:11.332371    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:16 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:16.333053    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:21 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:21.334605    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:26 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:26.335651    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:31 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:31.336957    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:36 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:36.338105    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:41 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:41.339065    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:46 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:46.340305    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:51 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:51.341889    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:29:56 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:29:56.343266    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:30:01 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:30:01.344854    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:30:06 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:30:06.345628    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:30:11 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:30:11.347308    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
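Every kubelet line in the log above repeats the same condition for the whole capture window: the container runtime's network is never ready because no CNI plugin initialized. As a minimal triage sketch, assuming shell access to the node (the conf_dir value is the one minikube writes into containerd's config later in this report; the other paths are stock crictl/CNI locations):

	minikube ssh -p no-preload-20220921220832-10174
	ls /etc/cni/net.d /etc/cni/net.mk          # does any CNI conflist ever get written?
	sudo crictl info | grep -i -A3 network     # the runtime's view of the NetworkReady condition

If the conf dir stays empty, kubelet keeps reporting NetworkPluginNotReady exactly as above and node readiness never flips.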
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld: exit status 1 (62.212428ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-86pzk" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-qrk4q" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-lsnrl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-gh8ld" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld: exit status 1
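The NotFound errors above most likely reflect a race in the post-mortem itself: between the jsonpath listing at helpers_test.go:261 and the describe call at helpers_test.go:275, the listed pods were deleted (and, for the deployments, replaced by pods with new names), so the describe step captured nothing.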
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (534.79s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (534.8s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220921221118-10174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2
E0921 22:24:20.481396   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 22:24:21.247146   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:24:27.010164   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:24:41.650406   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:24:48.930010   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:24:51.495414   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 22:24:58.904955   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:25:08.448497   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 22:26:02.147119   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
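The cert_rotation errors interleaved above are emitted by the test binary's long-lived client-go transports, which still watch client-certificate files belonging to profiles that earlier tests already deleted (functional, old-k8s-version, auto, kindnet, addons, enable-default-cni, cilium); they are background noise, not part of this test's failure. A quick way to spot such stale entries, as a sketch against the shared kubeconfig this job uses:

	KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig kubectl config get-contexts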

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220921221118-10174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: exit status 80 (8m52.788545109s)

-- stdout --
	* [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	* Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.6.0
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0921 22:24:01.692796  283599 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:24:01.693211  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693232  283599 out.go:309] Setting ErrFile to fd 2...
	I0921 22:24:01.693240  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693504  283599 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:24:01.694665  283599 out.go:303] Setting JSON to false
	I0921 22:24:01.696140  283599 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3993,"bootTime":1663795049,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:24:01.696247  283599 start.go:125] virtualization: kvm guest
	I0921 22:24:01.698874  283599 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:24:01.701214  283599 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:24:01.701128  283599 notify.go:214] Checking for updates...
	I0921 22:24:01.703092  283599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:24:01.704791  283599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:01.706544  283599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:24:01.708172  283599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:24:01.710349  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:01.710930  283599 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:24:01.744026  283599 docker.go:137] docker version: linux-20.10.18
	I0921 22:24:01.744136  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.840732  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.764457724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.840851  283599 docker.go:254] overlay module found
	I0921 22:24:01.843051  283599 out.go:177] * Using the docker driver based on existing profile
	I0921 22:24:01.844347  283599 start.go:284] selected driver: docker
	I0921 22:24:01.844371  283599 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.844475  283599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:24:01.845300  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.940944  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.86716064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.941199  283599 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:24:01.941223  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:01.941231  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:01.941249  283599 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.944240  283599 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:24:01.945596  283599 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:24:01.946905  283599 out.go:177] * Pulling base image ...
	I0921 22:24:01.948255  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:01.948306  283599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:24:01.948321  283599 cache.go:57] Caching tarball of preloaded images
	I0921 22:24:01.948361  283599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:24:01.948572  283599 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:24:01.948588  283599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:24:01.948702  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:01.976413  283599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:24:01.976445  283599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:24:01.976457  283599 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:24:01.976502  283599 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:24:01.976622  283599 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 78.111µs
	I0921 22:24:01.976652  283599 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:24:01.976660  283599 fix.go:55] fixHost starting: 
	I0921 22:24:01.976899  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.002084  283599 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221118-10174: state=Stopped err=<nil>
	W0921 22:24:02.002122  283599 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:24:02.004632  283599 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	I0921 22:24:02.006307  283599 cli_runner.go:164] Run: docker start default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.358108  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.385298  283599 kic.go:415] container "default-k8s-different-port-20220921221118-10174" state is running.
	I0921 22:24:02.385684  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.412757  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:02.412997  283599 machine.go:88] provisioning docker machine ...
	I0921 22:24:02.413031  283599 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:24:02.413108  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.438229  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:02.438400  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:02.438416  283599 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:24:02.439038  283599 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34230->127.0.0.1:49443: read: connection reset by peer
	I0921 22:24:05.584682  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:24:05.584766  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.608825  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:05.609026  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:05.609059  283599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:24:05.739656  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:24:05.739694  283599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:24:05.739749  283599 ubuntu.go:177] setting up certificates
	I0921 22:24:05.739765  283599 provision.go:83] configureAuth start
	I0921 22:24:05.739824  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.764789  283599 provision.go:138] copyHostCerts
	I0921 22:24:05.764839  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:24:05.764846  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:24:05.764904  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:24:05.764993  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:24:05.765005  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:24:05.765028  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:24:05.765086  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:24:05.765095  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:24:05.765118  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:24:05.765169  283599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:24:05.914466  283599 provision.go:172] copyRemoteCerts
	I0921 22:24:05.914534  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:24:05.914564  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.939805  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.031315  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:24:06.048618  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:24:06.065530  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:24:06.083800  283599 provision.go:86] duration metric: configureAuth took 344.021748ms
	I0921 22:24:06.083828  283599 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:24:06.083988  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:06.083999  283599 machine.go:91] provisioned docker machine in 3.670987023s
	I0921 22:24:06.084006  283599 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:24:06.084012  283599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:24:06.084049  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:24:06.084088  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.108286  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.203139  283599 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:24:06.205811  283599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:24:06.205839  283599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:24:06.205852  283599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:24:06.205864  283599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:24:06.205880  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:24:06.205944  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:24:06.206037  283599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:24:06.206142  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:24:06.212569  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:06.229418  283599 start.go:303] post-start completed in 145.398445ms
	I0921 22:24:06.229483  283599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:24:06.229517  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.253305  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.340119  283599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:24:06.344050  283599 fix.go:57] fixHost completed within 4.367385464s
	I0921 22:24:06.344071  283599 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 4.367430848s
	I0921 22:24:06.344157  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368445  283599 ssh_runner.go:195] Run: systemctl --version
	I0921 22:24:06.368501  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368505  283599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:24:06.368550  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.394444  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.396066  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.513229  283599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:24:06.524587  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:24:06.533746  283599 docker.go:188] disabling docker service ...
	I0921 22:24:06.533795  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:24:06.543075  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:24:06.551813  283599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:24:06.629483  283599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:24:06.707030  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:24:06.717244  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:24:06.729638  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.737194  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.744928  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.752650  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
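	# Note: the four sed edits above rewrite /etc/containerd/config.toml so that it
	# contains, among other settings:
	#   sandbox_image = "registry.k8s.io/pause:3.8"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.mk"
	# containerd then picks these up via the daemon-reload/restart commands that follow.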
	I0921 22:24:06.760419  283599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:24:06.766584  283599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:24:06.772903  283599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:24:06.844578  283599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:24:06.917291  283599 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:24:06.917353  283599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:24:06.921118  283599 start.go:471] Will wait 60s for crictl version
	I0921 22:24:06.921184  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:06.948257  283599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:24:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
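	# Note: containerd was restarted just above (22:24:06.844), so crictl reaching a CRI
	# server that is "not initialized yet" is expected; minikube's retry.go backs off 11s
	# and the second `sudo crictl version` below returns containerd 1.6.8 cleanly.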
	I0921 22:24:17.995620  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:18.018705  283599 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:24:18.018768  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.047337  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.078051  283599 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:24:18.079491  283599 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:24:18.103308  283599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:24:18.106553  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
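	# Note: the one-liner above is minikube's idempotent /etc/hosts edit: it filters out
	# any existing host.minikube.internal line, appends the fresh 192.168.85.1 mapping,
	# stages the result in /tmp, then sudo-copies it back; the same idiom reappears below
	# for control-plane.minikube.internal.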
	I0921 22:24:18.115993  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:18.116056  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.139896  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.139921  283599 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:24:18.139964  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.163323  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.163344  283599 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:24:18.163382  283599 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:24:18.186911  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:18.186935  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:18.186948  283599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:24:18.186961  283599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:24:18.187074  283599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
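	# Note: the four kubeadm documents above (InitConfiguration, ClusterConfiguration,
	# KubeletConfiguration, KubeProxyConfiguration) carry this test's non-default API
	# server port 8444 (bindPort and controlPlaneEndpoint) and are written out below as
	# /var/tmp/minikube/kubeadm.yaml.new (2076 bytes).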
	I0921 22:24:18.187152  283599 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
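	# Note: the [Service] drop-in above is scp'd below to
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes); the empty
	# ExecStart= line first clears any inherited command line before the full kubelet
	# invocation with the containerd CRI endpoints is installed.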
	I0921 22:24:18.187196  283599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:24:18.194012  283599 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:24:18.194081  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:24:18.200606  283599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:24:18.212899  283599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:24:18.224754  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:24:18.236775  283599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:24:18.239439  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.248263  283599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:24:18.248377  283599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:24:18.248421  283599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:24:18.248485  283599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:24:18.248538  283599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:24:18.248575  283599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:24:18.248658  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:24:18.248689  283599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:24:18.248705  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:24:18.248729  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:24:18.248758  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:24:18.248780  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:24:18.248846  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:18.249439  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:24:18.265894  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:24:18.282128  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:24:18.298690  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:24:18.315323  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:24:18.331842  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:24:18.348196  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:24:18.364368  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:24:18.380401  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:24:18.396696  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:24:18.413238  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:24:18.429482  283599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:24:18.441654  283599 ssh_runner.go:195] Run: openssl version
	I0921 22:24:18.446184  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:24:18.453215  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456119  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456166  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.460690  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:24:18.467196  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:24:18.474449  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477401  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477445  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.481956  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:24:18.488418  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:24:18.495604  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498556  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498600  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.503245  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
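The three openssl/ln pairs above implement OpenSSL's hashed-directory CA lookup: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash, and symlinking /etc/ssl/certs/<hash>.0 to the PEM is what lets TLS verifiers locate the CA by that hash. A small Go sketch that shells out to the same openssl invocation and creates the link (paths illustrative; the flags are exactly those in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of certPath and symlinks
// <certsDir>/<hash>.0 to it, as the log's openssl / ln -fs pair does.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}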
	I0921 22:24:18.509856  283599 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:18.509953  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:24:18.509985  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:18.533346  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:18.533375  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:18.533382  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:18.533388  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:18.533393  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:18.533402  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:18.533407  283599 cri.go:87] found id: ""
	I0921 22:24:18.533444  283599 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:24:18.545553  283599 cri.go:114] JSON = null
	W0921 22:24:18.545605  283599 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
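The warning above is a consistency check before unpausing: `crictl ps -a` reported six kube-system containers, but `runc --root /run/containerd/runc/k8s.io list -f json` printed `null` (no runc-visible state), so the paused-container list comes back empty and the 0-vs-6 mismatch is logged before the restart proceeds anyway. A sketch of that comparison, assuming runc's JSON array uses `id` and `status` fields (a hypothetical standalone check, not minikube's code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcState is the subset of fields assumed from `runc list -f json`.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, _ := exec.Command("runc", "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()

	var states []runcState // stays nil when runc prints "null", as in the log
	_ = json.Unmarshal(out, &states)

	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	crictlCount := 6 // count reported by `crictl ps -a` in the log
	if len(paused) != crictlCount {
		fmt.Printf("unpause check: list returned %d containers, but ps returned %d\n", len(paused), crictlCount)
	}
}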
	I0921 22:24:18.545686  283599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:24:18.552635  283599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:24:18.552664  283599 kubeadm.go:627] restartCluster start
	I0921 22:24:18.552705  283599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:24:18.558944  283599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.559817  283599 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220921221118-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:18.560296  283599 kubeconfig.go:127] "default-k8s-different-port-20220921221118-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:24:18.561146  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:24:18.562655  283599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:24:18.568841  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.568884  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.576584  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.776932  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.777023  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.786228  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.977461  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.977542  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.986186  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.177398  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.177487  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.186159  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.377453  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.377534  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.385921  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.577206  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.577296  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.586370  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.777572  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.777676  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.786797  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.977103  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.977188  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.985822  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.177132  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.177234  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.185876  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.377187  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.377298  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.386086  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.577399  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.577488  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.586142  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.777447  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.777527  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.786547  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.976769  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.976865  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.985682  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.176870  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.176951  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.185811  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.377116  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.377184  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.385829  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.577109  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.577202  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.585911  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.585933  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.585979  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.593866  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.593893  283599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0921 22:24:21.593899  283599 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:24:21.593908  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:24:21.593964  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:21.618017  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:21.618041  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:21.618048  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:21.618058  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:21.618064  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:21.618072  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:21.618078  283599 cri.go:87] found id: ""
	I0921 22:24:21.618082  283599 cri.go:232] Stopping containers: [1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767]
	I0921 22:24:21.618118  283599 ssh_runner.go:195] Run: which crictl
	I0921 22:24:21.621347  283599 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767
	I0921 22:24:21.645531  283599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:24:21.655622  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:24:21.662408  283599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 21 22:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep 21 22:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:11 /etc/kubernetes/scheduler.conf
	
	I0921 22:24:21.662459  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0921 22:24:21.669029  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0921 22:24:21.675699  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.682316  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.682358  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.688501  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0921 22:24:21.694928  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.696684  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:24:21.703329  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710109  283599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
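The grep/rm sequence above decides which files under /etc/kubernetes are stale: any file that does not mention the expected endpoint https://control-plane.minikube.internal:8444 is deleted so the kubeadm phases below regenerate it. A minimal sketch of that staleness test (the file list and endpoint come from the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path when it does not reference endpoint,
// mirroring the log's grep-then-rm pattern.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}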
	I0921 22:24:21.710132  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:21.757457  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.810948  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053458682s)
	I0921 22:24:22.810976  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.943243  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.995873  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
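Rather than a full `kubeadm init`, the restart path replays individual init phases against the same config, in the order shown above: certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of that sequence via os/exec (binary and config paths as in the log; a hypothetical driver, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.25.2/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// The phases the log runs, in order; each is a real
	// `kubeadm init phase` subcommand.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}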
	I0921 22:24:23.097694  283599 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:24:23.097766  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:23.608210  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.107567  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.187217  283599 api_server.go:71] duration metric: took 1.089523123s to wait for apiserver process to appear ...
	I0921 22:24:24.187296  283599 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:24:24.187323  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:24.187688  283599 api_server.go:256] stopped: https://192.168.85.2:8444/healthz: Get "https://192.168.85.2:8444/healthz": dial tcp 192.168.85.2:8444: connect: connection refused
	I0921 22:24:24.688449  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.592182  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:24:27.592315  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:24:27.688579  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.694601  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:27.694667  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.187832  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.192979  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.193004  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.688623  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.695172  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.695285  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:29.187841  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:29.193157  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0921 22:24:29.198775  283599 api_server.go:140] control plane version: v1.25.2
	I0921 22:24:29.198796  283599 api_server.go:130] duration metric: took 5.011488882s to wait for apiserver health ...
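The healthz exchange above shows the expected startup progression: connection refused while the apiserver binds, a 403 while the probe is still anonymous (RBAC not yet bootstrapped), 500s while individual post-start hooks (rbac/bootstrap-roles last) finish, and finally 200 about five seconds in. A sketch of such a polling loop, assuming a self-signed serving cert (hence InsecureSkipVerify) and a fixed retry interval:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns 200 or timeout elapses.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver's cert is minted by the cluster CA, which this
			// sketch does not trust; skip verification for illustration only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403/500 while bootstrap hooks run: keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}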
	I0921 22:24:29.198805  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:29.198812  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:29.201314  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:24:29.202798  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:24:29.206616  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:24:29.206636  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:24:29.221913  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
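With the docker driver and the containerd runtime, the log shows kindnet being chosen as the CNI and its manifest applied with the node-local kubectl and kubeconfig. The equivalent standalone invocation (paths exactly as in the Run line above):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Apply the CNI manifest with the node-local kubectl and kubeconfig,
	// matching the command in the log.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.2/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}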
	I0921 22:24:29.826767  283599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:24:29.834488  283599 system_pods.go:59] 9 kube-system pods found
	I0921 22:24:29.834517  283599 system_pods.go:61] "coredns-565d847f94-mrkjn" [7f364c47-74ce-4271-aab1-67bba320c586] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834528  283599 system_pods.go:61] "etcd-default-k8s-different-port-20220921221118-10174" [8f0f58a7-7eae-43db-840f-bde95464e94e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:24:29.834533  283599 system_pods.go:61] "kindnet-7wbpp" [3f16ae0b-2f66-4f1e-b234-74570472a7f8] Running
	I0921 22:24:29.834539  283599 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220921221118-10174" [3a935d6b-ca77-4bcb-ae19-0a2af77c12a1] Running
	I0921 22:24:29.834544  283599 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220921221118-10174" [d01ee91a-5587-48e9-a235-68a73d5fedef] Running
	I0921 22:24:29.834549  283599 system_pods.go:61] "kube-proxy-lzphc" [611dbd37-0771-41b2-b886-93f46d79f802] Running
	I0921 22:24:29.834554  283599 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220921221118-10174" [998713da-f133-43f7-9f11-c6110ad66c8d] Running
	I0921 22:24:29.834561  283599 system_pods.go:61] "metrics-server-5c8fd5cf8-sshzh" [5972fae5-09c2-4e2e-b609-ef85f72311e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834572  283599 system_pods.go:61] "storage-provisioner" [ca16dea1-fb3d-4cc1-b449-2236aefcc627] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834577  283599 system_pods.go:74] duration metric: took 7.786123ms to wait for pod list to return data ...
	I0921 22:24:29.834588  283599 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:24:29.837059  283599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:24:29.837085  283599 node_conditions.go:123] node cpu capacity is 8
	I0921 22:24:29.837096  283599 node_conditions.go:105] duration metric: took 2.500371ms to run NodePressure ...
	I0921 22:24:29.837121  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:30.025715  283599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029542  283599 kubeadm.go:778] kubelet initialised
	I0921 22:24:30.029565  283599 kubeadm.go:779] duration metric: took 3.826857ms waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029572  283599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:24:30.034316  283599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
	I0921 22:24:32.039865  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.040511  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:36.539322  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:38.539700  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:41.040333  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:43.539636  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:45.540134  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:48.040425  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:50.539475  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:52.539590  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:54.540310  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:57.040311  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:59.539775  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.040207  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.540336  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.039774  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.039928  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:11.040136  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:13.540022  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:16.040513  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:18.539457  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:21.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:23.539701  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.039677  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... identical pod_ready.go:102 entries repeat at ~2.5s intervals from 22:25:28 through 22:28:27 (79 further polls), each reporting the same Pending/Unschedulable status for pod "coredns-565d847f94-mrkjn" ...]
	I0921 22:28:29.540282  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
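
The pod_ready.go entries above come from minikube's readiness poller, which repeatedly fetches the pod object and inspects its status conditions. The pod never leaves Pending because the cluster's only node still carries the node.kubernetes.io/not-ready taint, which the CoreDNS pod does not tolerate, so the scheduler refuses to place it; a pod counts as ready only once its Ready condition is True, and this pod never even acquires a Ready condition. A minimal client-go sketch of that kind of check (the isPodReady helper is illustrative, not minikube's actual code):

    // podready_sketch.go — a minimal sketch of the readiness check that
    // pod_ready.go performs; assumes k8s.io/client-go. The helper name
    // isPodReady is hypothetical, not minikube's actual function.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    // A Pending, unscheduled pod (as in the log above) has no Ready
    // condition at all and therefore reports false.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "coredns-565d847f94-mrkjn", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("phase=%s ready=%v\n", pod.Status.Phase, isPodReady(pod))
    }
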
	I0921 22:28:30.037464  283599 pod_ready.go:81] duration metric: took 4m0.003103432s waiting for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
	E0921 22:28:30.037491  283599 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:28:30.037512  283599 pod_ready.go:38] duration metric: took 4m0.007931264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:28:30.037542  283599 kubeadm.go:631] restartCluster took 4m11.484871611s
	W0921 22:28:30.037694  283599 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:28:30.037731  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:28:32.836415  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.798662315s)
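
Once the 4m0s readiness budget expires, minikube gives up on restarting the existing cluster and falls back to a full "kubeadm reset" followed by a fresh "kubeadm init", as the surrounding lines show. A rough sketch of shelling out to the same reset command (the runCmd helper is hypothetical; minikube actually dispatches this through its ssh_runner over the driver connection):

    // reset_sketch.go — a rough sketch of the reset fallback; the command
    // layout mirrors the log line above, but runCmd is an illustrative
    // helper, not minikube's ssh_runner.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func runCmd(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        return err
    }

    func main() {
        // Mirrors: kubeadm reset --cri-socket /run/containerd/containerd.sock --force
        if err := runCmd("sudo", "kubeadm", "reset",
            "--cri-socket", "/run/containerd/containerd.sock", "--force"); err != nil {
            fmt.Println("reset failed:", err)
        }
    }
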
	I0921 22:28:32.836470  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:32.846281  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:28:32.853286  283599 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:28:32.853347  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:28:32.860321  283599 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:28:32.860372  283599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
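
The init invocation above passes a long --ignore-preflight-errors list because the docker driver runs the node inside a container, where checks such as SystemVerification, Swap, or Mem cannot be satisfied. A small sketch of how such a flag string can be assembled (the buildInitCmd helper and the shortened list are illustrative):

    // initflags_sketch.go — sketch of assembling the --ignore-preflight-errors
    // flag seen in the log; the slice below is a shortened excerpt of the
    // values in the log line, and buildInitCmd is a hypothetical helper.
    package main

    import (
        "fmt"
        "strings"
    )

    func buildInitCmd(configPath string, ignored []string) string {
        return fmt.Sprintf("kubeadm init --config %s --ignore-preflight-errors=%s",
            configPath, strings.Join(ignored, ","))
    }

    func main() {
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "Port-10250", "Swap", "Mem", "SystemVerification",
        }
        fmt.Println(buildInitCmd("/var/tmp/minikube/kubeadm.yaml", ignored))
    }
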
	I0921 22:28:32.899444  283599 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:28:32.899530  283599 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:28:32.927597  283599 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:28:32.927684  283599 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:28:32.927762  283599 kubeadm.go:317] OS: Linux
	I0921 22:28:32.927817  283599 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:28:32.927857  283599 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:28:32.927895  283599 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:28:32.927957  283599 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:28:32.928004  283599 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:28:32.928045  283599 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:28:32.928083  283599 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:28:32.928121  283599 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:28:32.928158  283599 kubeadm.go:317] CGROUPS_BLKIO: enabled
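
The CGROUPS_* lines come from kubeadm's system verification, which reports whether each cgroup controller is enabled on the host kernel. On Linux this information is readable from /proc/cgroups; the sketch below reproduces that style of report (illustrative only, not kubeadm's actual SystemVerification code):

    // cgroupcheck_sketch.go — a rough sketch of the cgroup-controller check
    // behind the CGROUPS_* lines above; reads /proc/cgroups on Linux.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/cgroups")
        if err != nil {
            fmt.Println("cannot read /proc/cgroups:", err)
            return
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "#") {
                continue
            }
            // columns: subsys_name hierarchy num_cgroups enabled
            fields := strings.Fields(line)
            if len(fields) == 4 {
                state := "disabled"
                if fields[3] == "1" {
                    state = "enabled"
                }
                fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
            }
        }
    }
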
	I0921 22:28:32.994267  283599 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:28:32.994393  283599 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:28:32.994471  283599 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:28:33.113433  283599 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:28:33.118993  283599 out.go:204]   - Generating certificates and keys ...
	I0921 22:28:33.119145  283599 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:28:33.119247  283599 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:28:33.119310  283599 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:28:33.119362  283599 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:28:33.119432  283599 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:28:33.119501  283599 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:28:33.119554  283599 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:28:33.119605  283599 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:28:33.119666  283599 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:28:33.119759  283599 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:28:33.119797  283599 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:28:33.119873  283599 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:28:33.240892  283599 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:28:33.319256  283599 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:28:33.514290  283599 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:28:33.579294  283599 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:28:33.591185  283599 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:28:33.591951  283599 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:28:33.592077  283599 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:28:33.671909  283599 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:28:33.674209  283599 out.go:204]   - Booting up control plane ...
	I0921 22:28:33.674356  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:28:33.674478  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:28:33.675328  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:28:33.677339  283599 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:28:33.679453  283599 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:28:40.182528  283599 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502979 seconds
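At this point the kubelet has booted the control plane from the static Pod manifests written in the lines above. As a hedged aside (these are the standard kubeadm manifest names, not printed verbatim in this log), they can be listed on the node:

	# Inside the node, e.g. via: minikube ssh -p default-k8s-different-port-20220921221118-10174
	ls /etc/kubernetes/manifests
	# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml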
	I0921 22:28:40.182719  283599 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:28:40.191775  283599 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:28:40.708308  283599 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:28:40.708506  283599 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:28:41.216221  283599 kubeadm.go:317] [bootstrap-token] Using token: 7zktge.i7kw817sdpmpqput
	I0921 22:28:41.217917  283599 out.go:204]   - Configuring RBAC rules ...
	I0921 22:28:41.218062  283599 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:28:41.221048  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:28:41.225663  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:28:41.227873  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0921 22:28:41.229840  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:28:41.231693  283599 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:28:41.238509  283599 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:28:41.448788  283599 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:28:41.684596  283599 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:28:41.686021  283599 kubeadm.go:317] 
	I0921 22:28:41.686112  283599 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:28:41.686121  283599 kubeadm.go:317] 
	I0921 22:28:41.686213  283599 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:28:41.686221  283599 kubeadm.go:317] 
	I0921 22:28:41.686253  283599 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:28:41.687200  283599 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:28:41.687275  283599 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:28:41.687282  283599 kubeadm.go:317] 
	I0921 22:28:41.687347  283599 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:28:41.687361  283599 kubeadm.go:317] 
	I0921 22:28:41.687420  283599 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:28:41.687443  283599 kubeadm.go:317] 
	I0921 22:28:41.687516  283599 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:28:41.687626  283599 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:28:41.687754  283599 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:28:41.687768  283599 kubeadm.go:317] 
	I0921 22:28:41.687856  283599 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:28:41.687945  283599 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:28:41.687952  283599 kubeadm.go:317] 
	I0921 22:28:41.688054  283599 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688176  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:28:41.688202  283599 kubeadm.go:317] 	--control-plane 
	I0921 22:28:41.688207  283599 kubeadm.go:317] 
	I0921 22:28:41.688304  283599 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:28:41.688309  283599 kubeadm.go:317] 
	I0921 22:28:41.688403  283599 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688525  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
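For reference, the --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA's public key. A hedged sketch of recomputing it on this node, using the standard kubeadm openssl recipe and the certificateDir /var/lib/minikube/certs reported earlier in this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print: b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7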
	I0921 22:28:41.691473  283599 kubeadm.go:317] W0921 22:28:32.894416    3309 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:28:41.691806  283599 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:28:41.691944  283599 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:28:41.691973  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:28:41.691983  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:28:41.694185  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:28:41.695783  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:28:41.699760  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:28:41.699784  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:28:41.776183  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:28:42.446104  283599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:28:42.446180  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.446216  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.524814  283599 ops.go:34] apiserver oom_adj: -16
	I0921 22:28:42.524918  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.099884  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.600017  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.099303  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.599933  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.100173  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.599961  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.099843  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.599840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.099465  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.599512  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.099998  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.599598  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.099840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.599433  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.099931  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.599355  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.099363  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.599865  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:52.099400  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:52.600056  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.100255  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.599772  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.668975  283599 kubeadm.go:1067] duration metric: took 11.222848116s to wait for elevateKubeSystemPrivileges.
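The burst of identical "kubectl get sa default" runs above is minikube polling, roughly every 500ms per the timestamps, until the default ServiceAccount exists. A hedged shell equivalent of that wait loop, reusing the exact command from the log:

	until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms interval visible in the timestamps
	done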
	I0921 22:28:53.669016  283599 kubeadm.go:398] StartCluster complete in 4m35.159165946s
	I0921 22:28:53.669039  283599 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:53.669157  283599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:28:53.670820  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:54.187769  283599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:28:54.187839  283599 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:28:54.187870  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:28:54.187894  283599 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:28:54.190631  283599 out.go:177] * Verifying Kubernetes components...
	I0921 22:28:54.187957  283599 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187964  283599 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187970  283599 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188002  283599 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188076  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:28:54.192035  283599 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192079  283599 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:28:54.192091  283599 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192114  283599 addons.go:162] addon dashboard should already be in state true
	I0921 22:28:54.192162  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192210  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:54.192299  283599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.192580  283599 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192616  283599 addons.go:162] addon metrics-server should already be in state true
	I0921 22:28:54.192633  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192666  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192163  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192666  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.193362  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.193439  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.234974  283599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:28:54.236667  283599 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.236692  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:28:54.236745  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.240000  283599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:28:54.239390  283599 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.241874  283599 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.244335  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:28:54.244363  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	W0921 22:28:54.241874  283599 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:28:54.244424  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.244454  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.244956  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.246658  283599 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.248082  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:28:54.248109  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:28:54.248165  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.272909  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.273873  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.277163  283599 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.277186  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:28:54.277236  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.290041  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.318706  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.398932  283599 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:28:54.399014  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:28:54.496523  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.498431  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.499591  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:28:54.499650  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:28:54.501640  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:28:54.501663  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:28:54.594519  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:28:54.594561  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:28:54.596768  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:28:54.596847  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:28:54.690036  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.690071  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:28:54.700119  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:28:54.700197  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:28:54.876320  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.883544  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:28:54.883571  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:28:54.977006  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:28:54.977040  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:28:55.079240  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:28:55.079273  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:28:55.176309  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:28:55.176344  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:28:55.276282  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:28:55.276317  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:28:55.379016  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.379044  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:28:55.386242  283599 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
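The sed pipeline run at 22:28:54 injects that host record by inserting a hosts block ahead of CoreDNS's forward plugin. Reconstructed from the sed expression itself, the resulting Corefile fragment is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}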
	I0921 22:28:55.399129  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.595061  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098450009s)
	I0921 22:28:55.786581  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288109437s)
	I0921 22:28:56.081753  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205376891s)
	I0921 22:28:56.081804  283599 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:56.387178  283599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0921 22:28:56.388690  283599 addons.go:414] enableAddons completed in 2.200797183s
	I0921 22:28:56.404853  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:58.405031  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:00.405582  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:02.905572  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:05.405473  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:07.904364  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:09.905589  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:12.405034  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:14.905741  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:17.404952  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:19.405175  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:21.405392  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:23.405620  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:25.905592  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:27.905775  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:30.405483  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:32.904863  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:35.404709  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:37.905690  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:40.405229  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:42.905360  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:44.905907  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:47.404631  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:49.405402  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:51.904997  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:53.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:56.405228  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:58.405705  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:00.905348  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:03.404779  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:05.404833  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:07.405804  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:09.904912  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:11.905117  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:13.905422  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:15.906020  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:18.404644  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:20.404682  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:22.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:24.905233  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:27.404679  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:29.904692  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:31.905266  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:34.405088  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:36.405476  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:38.904414  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:40.905386  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:43.404507  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:45.405356  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:47.904571  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:50.405311  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:52.904564  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:54.905119  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:57.405076  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:59.405121  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:01.904816  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:03.905408  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:05.905565  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:08.404718  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:10.405173  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:12.905041  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:14.905498  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:17.405656  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:19.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:22.405514  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:24.904738  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:27.404689  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:29.405353  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:31.904926  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:34.405471  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:36.905606  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:39.404550  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:41.405513  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:43.905655  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:46.405308  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:48.405699  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:50.905270  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:53.405205  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:55.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:57.905798  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:00.405370  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:02.405480  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:04.904649  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:06.905338  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:09.404845  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:11.405472  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:13.905469  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:16.405211  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:18.405365  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:20.904698  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:23.405458  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:25.905299  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:27.905466  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:29.905633  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:32.404583  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:34.404795  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:36.405323  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:38.405395  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:40.904581  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:42.905533  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:45.405100  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:47.405337  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:49.405417  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:51.905042  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.404654  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.406831  283599 node_ready.go:38] duration metric: took 4m0.00786279s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:32:54.409456  283599 out.go:177] 
	W0921 22:32:54.411031  283599 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:32:54.411055  283599 out.go:239] * 
	W0921 22:32:54.411890  283599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:32:54.413449  283599 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220921221118-10174 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2": exit status 80
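The wait that timed out is minikube's 6m node-readiness poll seen above. A hedged way to reproduce and debug it by hand against this profile (assuming kubectl is pointed at the profile's kubeconfig):

	kubectl wait --for=condition=Ready \
	  node/default-k8s-different-port-20220921221118-10174 --timeout=6m
	# On timeout, check the node's conditions and kubelet messages:
	kubectl describe node default-k8s-different-port-20220921221118-10174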
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221118-10174
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220921221118-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112",
	        "Created": "2022-09-21T22:11:25.759772693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283906,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:24:02.351378691Z",
	            "FinishedAt": "2022-09-21T22:24:01.088670196Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hostname",
	        "HostsPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hosts",
	        "LogPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112-json.log",
	        "Name": "/default-k8s-different-port-20220921221118-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220921221118-10174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220921221118-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220921221118-10174",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220921221118-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220921221118-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36a93f9568ff0607fd762c264a5429499a3bd1c6641a087329f11f0872de9644",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/36a93f9568ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220921221118-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "37728b19138a",
	                        "default-k8s-different-port-20220921221118-10174"
	                    ],
	                    "NetworkID": "e093ea2ee154cf6d0e5d3b4a191700b36287f8ecd49e1b54f684a8f299ea6b79",
	                    "EndpointID": "309e329d1f6701bbb84d1c083ed29999da2a9bd8b0ce2dba5c615ae7a0f15ea3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
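
The `Ports` section of the inspect output above is exactly what the harness reads back whenever it needs the SSH endpoint: the `docker container inspect -f` calls later in this log extract `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort`, which resolves to 127.0.0.1:49443 for this run. A minimal Go sketch of that lookup, assuming Docker is on PATH and the profile container still exists (the helper name `hostPort` is illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort returns the host port Docker published for the given container
	// port, using the same Go-template query that appears throughout this log.
	func hostPort(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Profile name taken from the inspect output above.
		port, err := hostPort("default-k8s-different-port-20220921221118-10174", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // 49443 in this run
	}

The same template with "8444/tcp" would return 49440, the forwarded apiserver port this profile uses.
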
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220921221118-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:23 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:24:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:24:01.692796  283599 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:24:01.693211  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693232  283599 out.go:309] Setting ErrFile to fd 2...
	I0921 22:24:01.693240  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693504  283599 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:24:01.694665  283599 out.go:303] Setting JSON to false
	I0921 22:24:01.696140  283599 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3993,"bootTime":1663795049,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:24:01.696247  283599 start.go:125] virtualization: kvm guest
	I0921 22:24:01.698874  283599 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:24:01.701214  283599 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:24:01.701128  283599 notify.go:214] Checking for updates...
	I0921 22:24:01.703092  283599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:24:01.704791  283599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:01.706544  283599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:24:01.708172  283599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:23:57.318050  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:59.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:01.710349  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:01.710930  283599 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:24:01.744026  283599 docker.go:137] docker version: linux-20.10.18
	I0921 22:24:01.744136  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.840732  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.764457724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.840851  283599 docker.go:254] overlay module found
	I0921 22:24:01.843051  283599 out.go:177] * Using the docker driver based on existing profile
	I0921 22:24:01.844347  283599 start.go:284] selected driver: docker
	I0921 22:24:01.844371  283599 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.844475  283599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:24:01.845300  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.940944  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.86716064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.941199  283599 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:24:01.941223  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:01.941231  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:01.941249  283599 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.944240  283599 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:24:01.945596  283599 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:24:01.946905  283599 out.go:177] * Pulling base image ...
	I0921 22:24:01.948255  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:01.948306  283599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:24:01.948321  283599 cache.go:57] Caching tarball of preloaded images
	I0921 22:24:01.948361  283599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:24:01.948572  283599 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:24:01.948588  283599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:24:01.948702  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:01.976413  283599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:24:01.976445  283599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:24:01.976457  283599 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:24:01.976502  283599 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:24:01.976622  283599 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 78.111µs
	I0921 22:24:01.976652  283599 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:24:01.976660  283599 fix.go:55] fixHost starting: 
	I0921 22:24:01.976899  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.002084  283599 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221118-10174: state=Stopped err=<nil>
	W0921 22:24:02.002122  283599 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:24:02.004632  283599 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	I0921 22:24:00.289698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.790230  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.006307  283599 cli_runner.go:164] Run: docker start default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.358108  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.385298  283599 kic.go:415] container "default-k8s-different-port-20220921221118-10174" state is running.
	I0921 22:24:02.385684  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.412757  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:02.412997  283599 machine.go:88] provisioning docker machine ...
	I0921 22:24:02.413031  283599 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:24:02.413108  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.438229  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:02.438400  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:02.438416  283599 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:24:02.439038  283599 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34230->127.0.0.1:49443: read: connection reset by peer
	I0921 22:24:05.584682  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:24:05.584766  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.608825  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:05.609026  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:05.609059  283599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:24:05.739656  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:24:05.739694  283599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:24:05.739749  283599 ubuntu.go:177] setting up certificates
	I0921 22:24:05.739765  283599 provision.go:83] configureAuth start
	I0921 22:24:05.739824  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.764789  283599 provision.go:138] copyHostCerts
	I0921 22:24:05.764839  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:24:05.764846  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:24:05.764904  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:24:05.764993  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:24:05.765005  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:24:05.765028  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:24:05.765086  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:24:05.765095  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:24:05.765118  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:24:05.765169  283599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:24:05.914466  283599 provision.go:172] copyRemoteCerts
	I0921 22:24:05.914534  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:24:05.914564  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.939805  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.031315  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:24:06.048618  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:24:06.065530  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:24:06.083800  283599 provision.go:86] duration metric: configureAuth took 344.021748ms
	I0921 22:24:06.083828  283599 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:24:06.083988  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:06.083999  283599 machine.go:91] provisioned docker machine in 3.670987023s
	I0921 22:24:06.084006  283599 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:24:06.084012  283599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:24:06.084049  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:24:06.084088  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.108286  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.203139  283599 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:24:06.205811  283599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:24:06.205839  283599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:24:06.205852  283599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:24:06.205864  283599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:24:06.205880  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:24:06.205944  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:24:06.206037  283599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:24:06.206142  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:24:06.212569  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:06.229418  283599 start.go:303] post-start completed in 145.398445ms
	I0921 22:24:06.229483  283599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:24:06.229517  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.253305  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.340119  283599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:24:06.344050  283599 fix.go:57] fixHost completed within 4.367385464s
	I0921 22:24:06.344071  283599 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 4.367430848s
	I0921 22:24:06.344157  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368445  283599 ssh_runner.go:195] Run: systemctl --version
	I0921 22:24:06.368501  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368505  283599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:24:06.368550  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.394444  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.396066  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.513229  283599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:24:06.524587  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:24:06.533746  283599 docker.go:188] disabling docker service ...
	I0921 22:24:06.533795  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:24:06.543075  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:24:06.551813  283599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:24:06.629483  283599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:24:01.818416  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:04.317912  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:05.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:07.290168  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:06.707030  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:24:06.717244  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:24:06.729638  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.737194  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.744928  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.752650  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.760419  283599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:24:06.766584  283599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:24:06.772903  283599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:24:06.844578  283599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:24:06.917291  283599 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:24:06.917353  283599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:24:06.921118  283599 start.go:471] Will wait 60s for crictl version
	I0921 22:24:06.921184  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:06.948257  283599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:24:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
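The retry at retry.go:31 backs off and reruns `sudo crictl version` until containerd's CRI server finishes initialising (it succeeds about 11 seconds later, at 22:24:17). A hypothetical sketch of such a fixed-delay retry helper, not minikube's retry package itself:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs the command up to attempts times, sleeping between failures,
// the way the log above waits out "server is not initialized yet".
func retry(attempts int, delay time.Duration, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := retry(5, 11*time.Second, "crictl", "version"); err != nil {
		fmt.Println("giving up:", err)
	}
}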
	I0921 22:24:06.817672  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.317278  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:11.317829  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.789185  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:12.289080  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:13.817410  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:15.817496  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:17.995620  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:18.018705  283599 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:24:18.018768  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.047337  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.078051  283599 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:24:14.289667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:16.789199  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:18.079491  283599 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:24:18.103308  283599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:24:18.106553  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.115993  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:18.116056  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.139896  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.139921  283599 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:24:18.139964  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.163323  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.163344  283599 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:24:18.163382  283599 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:24:18.186911  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:18.186935  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:18.186948  283599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:24:18.186961  283599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:24:18.187074  283599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:24:18.187152  283599 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
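kubeadm.go:161 and kubeadm.go:962 render the kubeadm YAML and kubelet unit above from the options structs that precede them. A toy text/template sketch of that render step, covering only a fragment of the real config; the struct and template here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A few fields from the log's kubeadm options, enough to render a fragment.
type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
kubernetesVersion: {{.KubernetesVersion}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts{"192.168.85.2", 8444, "v1.25.2"}); err != nil {
		panic(err)
	}
}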
	I0921 22:24:18.187196  283599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:24:18.194012  283599 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:24:18.194081  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:24:18.200606  283599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:24:18.212899  283599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:24:18.224754  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:24:18.236775  283599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:24:18.239439  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.248263  283599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:24:18.248377  283599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:24:18.248421  283599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:24:18.248485  283599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:24:18.248538  283599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:24:18.248575  283599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:24:18.248658  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:24:18.248689  283599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:24:18.248705  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:24:18.248729  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:24:18.248758  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:24:18.248780  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:24:18.248846  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:18.249439  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:24:18.265894  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:24:18.282128  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:24:18.298690  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:24:18.315323  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:24:18.331842  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:24:18.348196  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:24:18.364368  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:24:18.380401  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:24:18.396696  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:24:18.413238  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:24:18.429482  283599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:24:18.441654  283599 ssh_runner.go:195] Run: openssl version
	I0921 22:24:18.446184  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:24:18.453215  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456119  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456166  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.460690  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:24:18.467196  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:24:18.474449  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477401  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477445  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.481956  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:24:18.488418  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:24:18.495604  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498556  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498600  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.503245  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
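Each openssl/ln pair above installs one CA into the system trust directory: `openssl x509 -hash -noout` prints the subject hash that names the symlink OpenSSL looks up (e.g. b5213941.0 for minikubeCA.pem). A hedged Go sketch of that step, shelling out to openssl as the log does; it assumes openssl is on PATH and the caller can write to the cert dir:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCA symlinks a CA PEM into certDir under its OpenSSL subject hash,
// mirroring: openssl x509 -hash -noout -in <pem>; ln -fs <pem> <hash>.0
func linkCA(pem, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := certDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ln -fs semantics: replace any stale link first
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}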
	I0921 22:24:18.509856  283599 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:18.509953  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:24:18.509985  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:18.533346  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:18.533375  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:18.533382  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:18.533388  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:18.533393  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:18.533402  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:18.533407  283599 cri.go:87] found id: ""
	I0921 22:24:18.533444  283599 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:24:18.545553  283599 cri.go:114] JSON = null
	W0921 22:24:18.545605  283599 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I0921 22:24:18.545686  283599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:24:18.552635  283599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:24:18.552664  283599 kubeadm.go:627] restartCluster start
	I0921 22:24:18.552705  283599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:24:18.558944  283599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.559817  283599 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220921221118-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:18.560296  283599 kubeconfig.go:127] "default-k8s-different-port-20220921221118-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:24:18.561146  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:24:18.562655  283599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:24:18.568841  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.568884  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.576584  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.776932  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.777023  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.786228  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.977461  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.977542  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.986186  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.177398  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.177487  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.186159  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.377453  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.377534  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.385921  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.577206  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.577296  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.586370  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.777572  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.777676  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.786797  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.977103  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.977188  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.985822  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.177132  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.177234  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.185876  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.377187  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.377298  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.386086  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.577399  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.577488  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.586142  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.777447  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.777527  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.786547  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.976769  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.976865  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.985682  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.176870  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.176951  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.185811  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.377116  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.377184  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.385829  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.577109  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.577202  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.585911  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.585933  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.585979  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.593866  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.593893  283599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
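The probe loop above polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 200ms and declares "needs reconfigure" once the deadline passes without a match. A small deadline-bounded poll in the same shape; this is a hypothetical helper, not api_server.go itself:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep for pattern until the deadline passes.
func waitForProcess(pattern string, timeout, interval time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return true // pgrep exits 0 when a match exists
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	if !waitForProcess("kube-apiserver.*minikube.*", 3*time.Second, 200*time.Millisecond) {
		fmt.Println("timed out waiting for the condition")
	}
}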
	I0921 22:24:21.593899  283599 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:24:21.593908  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:24:21.593964  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:21.618017  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:21.618041  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:21.618048  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:21.618058  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:21.618064  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:21.618072  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:21.618078  283599 cri.go:87] found id: ""
	I0921 22:24:21.618082  283599 cri.go:232] Stopping containers: [1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767]
	I0921 22:24:21.618118  283599 ssh_runner.go:195] Run: which crictl
	I0921 22:24:21.621347  283599 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767
	I0921 22:24:21.645531  283599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:24:21.655622  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:24:21.662408  283599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 21 22:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep 21 22:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:11 /etc/kubernetes/scheduler.conf
	
	I0921 22:24:21.662459  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0921 22:24:21.669029  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0921 22:24:21.675699  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.682316  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.682358  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.688501  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0921 22:24:17.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:19.818111  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:18.789856  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.289803  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.694928  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.696684  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:24:21.703329  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710109  283599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710132  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:21.757457  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.810948  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053458682s)
	I0921 22:24:22.810976  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.943243  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.995873  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:23.097694  283599 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:24:23.097766  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:23.608210  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.107567  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.187217  283599 api_server.go:71] duration metric: took 1.089523123s to wait for apiserver process to appear ...
	I0921 22:24:24.187296  283599 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:24:24.187323  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:24.187688  283599 api_server.go:256] stopped: https://192.168.85.2:8444/healthz: Get "https://192.168.85.2:8444/healthz": dial tcp 192.168.85.2:8444: connect: connection refused
	I0921 22:24:24.688449  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:22.317667  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:24.317872  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:23.789425  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:25.789684  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.790412  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.592182  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:24:27.592315  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:24:27.688579  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.694601  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:27.694667  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.187832  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.192979  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.193004  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.688623  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.695172  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.695285  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:29.187841  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:29.193157  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0921 22:24:29.198775  283599 api_server.go:140] control plane version: v1.25.2
	I0921 22:24:29.198796  283599 api_server.go:130] duration metric: took 5.011488882s to wait for apiserver health ...
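The healthz probes above progress from connection refused, to 403 (the probe is anonymous, so RBAC rejects it until the bootstrap roles exist), to 500 while post-start hooks finish, to 200. A sketch of such a probe with net/http; InsecureSkipVerify stands in for real CA handling here and is an assumption, as is the unauthenticated request:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Accept the apiserver's self-signed cert; acceptable for a local probe only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.85.2:8444/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. connection refused while restarting
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
}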
	I0921 22:24:29.198805  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:29.198812  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:29.201314  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:24:29.202798  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:24:29.206616  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:24:29.206636  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:24:29.221913  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:24:29.826767  283599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:24:29.834488  283599 system_pods.go:59] 9 kube-system pods found
	I0921 22:24:29.834517  283599 system_pods.go:61] "coredns-565d847f94-mrkjn" [7f364c47-74ce-4271-aab1-67bba320c586] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834528  283599 system_pods.go:61] "etcd-default-k8s-different-port-20220921221118-10174" [8f0f58a7-7eae-43db-840f-bde95464e94e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:24:29.834533  283599 system_pods.go:61] "kindnet-7wbpp" [3f16ae0b-2f66-4f1e-b234-74570472a7f8] Running
	I0921 22:24:29.834539  283599 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220921221118-10174" [3a935d6b-ca77-4bcb-ae19-0a2af77c12a1] Running
	I0921 22:24:29.834544  283599 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220921221118-10174" [d01ee91a-5587-48e9-a235-68a73d5fedef] Running
	I0921 22:24:29.834549  283599 system_pods.go:61] "kube-proxy-lzphc" [611dbd37-0771-41b2-b886-93f46d79f802] Running
	I0921 22:24:29.834554  283599 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220921221118-10174" [998713da-f133-43f7-9f11-c6110ad66c8d] Running
	I0921 22:24:29.834561  283599 system_pods.go:61] "metrics-server-5c8fd5cf8-sshzh" [5972fae5-09c2-4e2e-b609-ef85f72311e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834572  283599 system_pods.go:61] "storage-provisioner" [ca16dea1-fb3d-4cc1-b449-2236aefcc627] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834577  283599 system_pods.go:74] duration metric: took 7.786123ms to wait for pod list to return data ...
	I0921 22:24:29.834588  283599 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:24:29.837059  283599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:24:29.837085  283599 node_conditions.go:123] node cpu capacity is 8
	I0921 22:24:29.837096  283599 node_conditions.go:105] duration metric: took 2.500371ms to run NodePressure ...
	I0921 22:24:29.837121  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:30.025715  283599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029542  283599 kubeadm.go:778] kubelet initialised
	I0921 22:24:30.029565  283599 kubeadm.go:779] duration metric: took 3.826857ms waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029572  283599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:24:30.034316  283599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
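pod_ready.go:78 then watches each system-critical pod until its Ready condition holds, logging the full PodStatus on every poll (as in the Pending entries below). A hypothetical one-shot equivalent using the bundled kubectl's `wait` subcommand; the pod name and paths are copied from the log, and minikube itself polls the API directly rather than shelling out like this:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.25.2/kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"wait", "--namespace", "kube-system",
		"--for=condition=Ready", "pod", "coredns-565d847f94-mrkjn",
		"--timeout=4m0s")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}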
	I0921 22:24:26.817684  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:29.317793  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:31.318001  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:30.289213  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.039865  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.040511  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:36.539322  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:33.817371  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:35.817456  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.789530  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:37.289284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:38.539700  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:41.040333  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:37.817967  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:40.318244  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:39.789967  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:42.289726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:43.539636  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:45.540134  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:42.817716  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.818139  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.789355  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:47.288847  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:48.040425  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:50.539475  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:47.317825  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.318211  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.289182  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:51.289938  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:52.539590  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:54.540310  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:51.817491  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:55.818165  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.789719  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:56.289013  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:57.040311  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:59.539775  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.318151  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:00.318254  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.289251  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:00.789124  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.789910  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.040207  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.540336  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.817283  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.817911  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:05.290121  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.789553  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.039774  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.039928  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:11.040136  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.817957  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:10.289528  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:12.789110  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:13.540022  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:16.040513  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:12.317490  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.818433  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.789413  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:16.789947  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:18.539457  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:21.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:17.317880  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.289330  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:21.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:23.539701  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.039677  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:22.317640  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:24.318075  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:23.789488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:25.789726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:28.539400  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:30.540154  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.817737  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.818270  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:31.318310  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.289323  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:30.789442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:32.789667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:33.039502  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.039801  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:33.318392  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.818247  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:34.790488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.288758  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.539221  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.539681  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:41.539999  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:38.317564  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:40.317641  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.289052  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:41.789424  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:44.040284  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:46.540320  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:42.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:45.317732  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:44.289331  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:46.789866  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:49.039837  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:51.540123  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:47.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:49.314620  276511 pod_ready.go:81] duration metric: took 4m0.002300536s waiting for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	E0921 22:25:49.314670  276511 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:25:49.314692  276511 pod_ready.go:38] duration metric: took 4m0.007078344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:25:49.314717  276511 kubeadm.go:631] restartCluster took 4m10.710033944s
	W0921 22:25:49.314858  276511 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:25:49.314887  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
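Once the 4m0s wait expires, the recovery path is a full wipe: kubeadm reset tears down the control-plane state so the subsequent init starts clean. The same invocation as an os/exec sketch, with the binary path and CRI socket copied from the log line above:

package sketch

import (
	"fmt"
	"os/exec"
)

// resetCluster runs on the node (minikube drives it through ssh_runner).
func resetCluster() error {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.2/kubeadm",
		"reset", "--cri-socket", "/run/containerd/containerd.sock", "--force")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubeadm reset: %v\n%s", err, out)
	}
	return nil
}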
	I0921 22:25:49.289362  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:51.789574  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:54.040292  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:56.540637  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:52.154431  276511 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.839517184s)
	I0921 22:25:52.154487  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:25:52.163969  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:25:52.170969  276511 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:25:52.171027  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:25:52.177996  276511 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
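That exit status 2 is the expected outcome here: the reset just deleted every component kubeconfig, so there is no stale config to clean up and the cleanup step is skipped. The equivalent probe in Go, as a sketch:

package sketch

import "os"

// hasStaleConfigs mirrors the `ls -la` check above.
func hasStaleConfigs() bool {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if _, err := os.Stat(f); err == nil {
			return true // leftover config from a previous init
		}
	}
	return false
}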
	I0921 22:25:52.178063  276511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:25:52.213969  276511 kubeadm.go:317] W0921 22:25:52.213140    3321 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:25:52.246713  276511 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:25:52.310910  276511 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
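The first warning is kubeadm's CRI endpoint normalization: a criSocket value without a URL scheme is deprecated, so "unix" is prepended automatically. Approximately:

package sketch

import "strings"

// normalizeCRISocket approximates the behavior the warning describes:
// "/run/containerd/containerd.sock" becomes "unix:///run/containerd/containerd.sock".
func normalizeCRISocket(endpoint string) string {
	if strings.Contains(endpoint, "://") {
		return endpoint
	}
	return "unix://" + endpoint
}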
	I0921 22:25:54.288796  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:56.289801  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:01.184243  276511 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:26:01.184314  276511 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:26:01.184416  276511 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:26:01.184507  276511 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:26:01.184592  276511 kubeadm.go:317] OS: Linux
	I0921 22:26:01.184673  276511 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:26:01.184737  276511 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:26:01.184793  276511 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:26:01.184856  276511 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:26:01.184921  276511 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:26:01.184985  276511 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:26:01.185046  276511 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:26:01.185099  276511 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:26:01.185157  276511 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:26:01.185254  276511 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:26:01.185380  276511 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:26:01.185526  276511 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:26:01.185623  276511 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:26:01.187463  276511 out.go:204]   - Generating certificates and keys ...
	I0921 22:26:01.187540  276511 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:26:01.187594  276511 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:26:01.187659  276511 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:26:01.187785  276511 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:26:01.187900  276511 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:26:01.187958  276511 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:26:01.188014  276511 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:26:01.188086  276511 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:26:01.188221  276511 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:26:01.188336  276511 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:26:01.188409  276511 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:26:01.188488  276511 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:26:01.188556  276511 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:26:01.188636  276511 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:26:01.188731  276511 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:26:01.188817  276511 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:26:01.188953  276511 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:26:01.189087  276511 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:26:01.189191  276511 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:26:01.189310  276511 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:26:01.191284  276511 out.go:204]   - Booting up control plane ...
	I0921 22:26:01.191385  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:26:01.191486  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:26:01.191561  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:26:01.191748  276511 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:26:01.191985  276511 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:26:01.192105  276511 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503275 seconds
	I0921 22:26:01.192289  276511 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:26:01.192460  276511 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:26:01.192545  276511 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:26:01.192839  276511 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:26:01.192906  276511 kubeadm.go:317] [bootstrap-token] Using token: 9ldpwz.b05pw96cyce3l1nr
	I0921 22:26:01.194593  276511 out.go:204]   - Configuring RBAC rules ...
	I0921 22:26:01.194724  276511 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:26:01.194852  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:26:01.195058  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:26:01.195234  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:26:01.195387  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:26:01.195500  276511 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:26:01.195644  276511 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:26:01.195703  276511 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:26:01.195765  276511 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:26:01.195777  276511 kubeadm.go:317] 
	I0921 22:26:01.195861  276511 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:26:01.195872  276511 kubeadm.go:317] 
	I0921 22:26:01.195980  276511 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:26:01.196004  276511 kubeadm.go:317] 
	I0921 22:26:01.196036  276511 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:26:01.196117  276511 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:26:01.196194  276511 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:26:01.196207  276511 kubeadm.go:317] 
	I0921 22:26:01.196286  276511 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:26:01.196303  276511 kubeadm.go:317] 
	I0921 22:26:01.196379  276511 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:26:01.196404  276511 kubeadm.go:317] 
	I0921 22:26:01.196485  276511 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:26:01.196595  276511 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:26:01.196694  276511 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:26:01.196706  276511 kubeadm.go:317] 
	I0921 22:26:01.196820  276511 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:26:01.196920  276511 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:26:01.196931  276511 kubeadm.go:317] 
	I0921 22:26:01.197032  276511 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197181  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:26:01.197220  276511 kubeadm.go:317] 	--control-plane 
	I0921 22:26:01.197231  276511 kubeadm.go:317] 
	I0921 22:26:01.197362  276511 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:26:01.197381  276511 kubeadm.go:317] 
	I0921 22:26:01.197495  276511 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197628  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:26:01.197660  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:26:01.197674  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:26:01.199797  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:25:59.039749  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.040507  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.201405  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:26:01.205181  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:26:01.205199  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:26:01.218971  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:25:58.789397  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:00.789911  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:03.540344  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:06.039881  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:02.006490  276511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:26:02.006560  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.006575  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.013858  276511 ops.go:34] apiserver oom_adj: -16
	I0921 22:26:02.099832  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.694112  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.194089  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.693535  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.193854  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.693713  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.194101  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.694288  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.193619  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.693501  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.289345  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:05.789183  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:08.040230  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:10.539463  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:07.193590  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.693901  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.194072  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.694197  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.193914  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.693488  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.194416  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.693496  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.194435  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.694097  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.289258  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:10.789536  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.790035  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.194461  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.694279  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.193818  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.693711  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.758985  276511 kubeadm.go:1067] duration metric: took 11.752476269s to wait for elevateKubeSystemPrivileges.
	I0921 22:26:13.759013  276511 kubeadm.go:398] StartCluster complete in 4m35.198807914s
	I0921 22:26:13.759030  276511 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:13.759144  276511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:26:13.760661  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:14.276964  276511 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
	I0921 22:26:14.277021  276511 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:26:14.277060  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:26:14.279846  276511 out.go:177] * Verifying Kubernetes components...
	I0921 22:26:14.277154  276511 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:26:14.277306  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:26:14.281313  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:26:14.281349  276511 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281359  276511 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281373  276511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:26:14.281387  276511 addons.go:65] Setting metrics-server=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281397  276511 addons.go:65] Setting dashboard=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281436  276511 addons.go:153] Setting addon dashboard=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281450  276511 addons.go:162] addon dashboard should already be in state true
	I0921 22:26:14.281497  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281400  276511 addons.go:153] Setting addon metrics-server=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281576  276511 addons.go:162] addon metrics-server should already be in state true
	I0921 22:26:14.281377  276511 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	I0921 22:26:14.281640  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	W0921 22:26:14.281653  276511 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:26:14.281684  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281727  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282004  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282138  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282139  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.321366  276511 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.323218  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:26:14.323243  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:26:14.323321  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.323433  276511 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.323452  276511 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:26:14.323478  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.323995  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.331074  276511 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:26:14.333243  276511 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.335670  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:26:14.335699  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:26:14.335828  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.338700  276511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:26:12.540251  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:15.040305  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:14.339971  276511 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.339996  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:26:14.340067  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.357088  276511 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.357118  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:26:14.357179  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.363845  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.373248  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.374001  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.403584  276511 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:26:14.403673  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:26:14.403710  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.597706  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:26:14.597740  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:26:14.598185  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:26:14.598208  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:26:14.678717  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.691157  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:26:14.691190  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:26:14.776824  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.780103  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:26:14.780131  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:26:14.796772  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.796802  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:26:14.877240  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:26:14.877270  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:26:14.886529  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.982072  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:26:14.982106  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:26:15.083042  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:26:15.083073  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:26:15.185025  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:26:15.185058  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:26:15.288358  276511 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:26:15.295798  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:26:15.295830  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:26:15.390667  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:26:15.390693  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:26:15.415462  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.415496  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:26:15.492343  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.887638  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.208874575s)
	I0921 22:26:15.887703  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110843194s)
	I0921 22:26:15.982100  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095511944s)
	I0921 22:26:15.982142  276511 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220921220832-10174"
	I0921 22:26:16.410487  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:16.706261  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213866962s)
	I0921 22:26:16.708800  276511 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:26:16.709899  276511 addons.go:414] enableAddons completed in 2.432760887s
	I0921 22:26:15.290491  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.789818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.539620  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:20.039549  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:18.911099  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:21.409684  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:20.289226  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292776  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292799  265259 node_ready.go:38] duration metric: took 4m0.017444735s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:26:22.294631  265259 out.go:177] 
	W0921 22:26:22.296115  265259 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:26:22.296143  265259 out.go:239] * 
	W0921 22:26:22.296927  265259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:26:22.298511  265259 out.go:177] 
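The node_ready.go:58 entries above are minikube polling the node's Ready condition until its deadline (4m0s for the embed-certs profile that just failed, 6m0s for the no-preload profile per the "waiting up to 6m0s" line at 22:26:14.403584). As a rough, assumption-level sketch of such a poll using client-go (node name and kubeconfig path are taken from this log; the code is illustrative, not minikube's implementation):

// nodeready_sketch.go: illustrative only -- a minimal client-go poll of a
// node's Ready condition, analogous to what the node_ready.go lines above
// report. Timeouts and paths are assumptions based on this log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used elsewhere in this log; adjust for a local run.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodeName := "no-preload-20220921220832-10174" // node from the log above

	// Poll every 500ms for up to 6 minutes, matching the announced wait.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node %q has status %q:%q\n", nodeName, cond.Type, cond.Status)
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}

Returning false, nil on transient Get errors keeps the poll alive rather than aborting, which matches how the log keeps emitting "Ready":"False" status lines until the deadline produces the GUEST_START timeout seen above.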
	I0921 22:26:22.539641  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:25.039622  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:23.410505  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:25.909606  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:27.539385  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:29.539878  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:31.540249  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:27.910578  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:30.410429  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:33.540339  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:35.541025  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:32.910296  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:34.911081  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:38.039663  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:40.539522  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:37.410360  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:39.410436  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:42.540000  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:45.040231  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:41.909862  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:43.910310  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:46.409644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:47.540283  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:50.039510  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:48.410566  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:50.410732  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:52.039949  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:54.540144  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:52.910395  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:54.910495  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:57.039966  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:59.040209  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.539473  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:57.409907  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:59.410288  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:03.540044  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:06.040183  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.910153  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:04.409817  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:06.410562  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:08.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:10.539873  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:08.910302  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:11.410571  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:13.039961  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:15.040246  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:13.909964  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:15.910369  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:17.539604  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:19.539765  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:18.410585  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:20.910125  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:22.040021  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:24.539835  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:26.540240  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:22.910441  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:25.410069  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:28.540555  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:31.039426  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:27.410438  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:29.410512  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:33.040327  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:35.040601  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:31.910290  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:34.409802  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:37.540256  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:40.039584  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:36.909982  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:39.409679  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:41.410245  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:42.539492  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:44.539613  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:46.540433  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:43.909863  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:45.910696  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:49.039750  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:51.040314  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:48.410147  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:50.410237  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:53.040407  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:55.540422  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:52.910535  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:55.410601  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:58.040486  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:00.540148  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:57.910322  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:59.910846  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:03.039402  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:05.040045  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:02.410370  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:04.410513  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:07.040112  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:09.539484  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:11.539916  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:06.910328  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:09.409926  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:11.410618  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:14.040357  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:16.040410  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:13.909830  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:15.910746  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:18.539390  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:20.539944  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:18.409773  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:20.410208  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:22.540064  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:25.039880  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:22.410702  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:24.909931  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:27.539325  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:29.540282  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:30.037464  283599 pod_ready.go:81] duration metric: took 4m0.003103432s waiting for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
	E0921 22:28:30.037491  283599 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:28:30.037512  283599 pod_ready.go:38] duration metric: took 4m0.007931264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:28:30.037542  283599 kubeadm.go:631] restartCluster took 4m11.484871611s
	W0921 22:28:30.037694  283599 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
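The four-minute pod_ready loop above never progresses because the single node still carries the node.kubernetes.io/not-ready taint, so the scheduler keeps reporting coredns-565d847f94-mrkjn as Unschedulable. A minimal way to confirm this by hand against the same cluster (illustrative kubectl invocations, not part of the test run; the pod and namespace names are taken from the log):

	kubectl -n kube-system get pod coredns-565d847f94-mrkjn -o wide
	kubectl get nodes -o jsonpath='{.items[*].spec.taints}'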
	I0921 22:28:30.037731  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:28:26.910183  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:28.910722  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:31.410255  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:32.836415  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.798662315s)
	I0921 22:28:32.836470  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:32.846281  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:28:32.853286  283599 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:28:32.853347  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:28:32.860321  283599 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:28:32.860372  283599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:28:32.899444  283599 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:28:32.899530  283599 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:28:32.927597  283599 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:28:32.927684  283599 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:28:32.927762  283599 kubeadm.go:317] OS: Linux
	I0921 22:28:32.927817  283599 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:28:32.927857  283599 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:28:32.927895  283599 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:28:32.927957  283599 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:28:32.928004  283599 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:28:32.928045  283599 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:28:32.928083  283599 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:28:32.928121  283599 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:28:32.928158  283599 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:28:32.994267  283599 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:28:32.994393  283599 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:28:32.994471  283599 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:28:33.113433  283599 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:28:33.118993  283599 out.go:204]   - Generating certificates and keys ...
	I0921 22:28:33.119145  283599 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:28:33.119247  283599 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:28:33.119310  283599 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:28:33.119362  283599 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:28:33.119432  283599 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:28:33.119501  283599 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:28:33.119554  283599 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:28:33.119605  283599 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:28:33.119666  283599 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:28:33.119759  283599 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:28:33.119797  283599 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:28:33.119873  283599 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:28:33.240892  283599 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:28:33.319256  283599 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:28:33.514290  283599 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:28:33.579294  283599 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:28:33.591185  283599 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:28:33.591951  283599 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:28:33.592077  283599 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:28:33.671909  283599 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:28:33.674209  283599 out.go:204]   - Booting up control plane ...
	I0921 22:28:33.674356  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:28:33.674478  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:28:33.675328  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:28:33.677339  283599 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:28:33.679453  283599 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
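While kubeadm waits here, the kubelet boots the control plane from static Pod manifests. Under the standard kubeadm layout (a sketch of what the wait refers to; this listing was not captured by the run), the manifests can be inspected directly on the node:

	ls /etc/kubernetes/manifests
	# expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml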
	I0921 22:28:33.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:35.410708  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.182528  283599 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502979 seconds
	I0921 22:28:40.182719  283599 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:28:40.191775  283599 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:28:40.708308  283599 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:28:40.708506  283599 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:28:41.216221  283599 kubeadm.go:317] [bootstrap-token] Using token: 7zktge.i7kw817sdpmpqput
	I0921 22:28:41.217917  283599 out.go:204]   - Configuring RBAC rules ...
	I0921 22:28:41.218062  283599 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:28:41.221048  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:28:41.225663  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:28:41.227873  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:28:41.229840  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:28:41.231693  283599 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:28:41.238509  283599 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:28:41.448788  283599 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:28:41.684596  283599 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:28:41.686021  283599 kubeadm.go:317] 
	I0921 22:28:41.686112  283599 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:28:41.686121  283599 kubeadm.go:317] 
	I0921 22:28:41.686213  283599 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:28:41.686221  283599 kubeadm.go:317] 
	I0921 22:28:41.686253  283599 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:28:41.687200  283599 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:28:41.687275  283599 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:28:41.687282  283599 kubeadm.go:317] 
	I0921 22:28:41.687347  283599 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:28:41.687361  283599 kubeadm.go:317] 
	I0921 22:28:41.687420  283599 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:28:41.687443  283599 kubeadm.go:317] 
	I0921 22:28:41.687516  283599 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:28:41.687626  283599 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:28:41.687754  283599 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:28:41.687768  283599 kubeadm.go:317] 
	I0921 22:28:41.687856  283599 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:28:41.687945  283599 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:28:41.687952  283599 kubeadm.go:317] 
	I0921 22:28:41.688054  283599 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688176  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:28:41.688202  283599 kubeadm.go:317] 	--control-plane 
	I0921 22:28:41.688207  283599 kubeadm.go:317] 
	I0921 22:28:41.688304  283599 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:28:41.688309  283599 kubeadm.go:317] 
	I0921 22:28:41.688403  283599 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688525  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:28:41.691473  283599 kubeadm.go:317] W0921 22:28:32.894416    3309 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:28:41.691806  283599 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:28:41.691944  283599 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
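The first warning above is kubeadm noting that the criSocket value lacks a URL scheme, which it works around by prepending "unix". A minimal config fragment that would silence the warning, assuming the v1beta3 kubeadm schema used with this Kubernetes version:

	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	nodeRegistration:
	  # explicit unix:// scheme, as the warning suggests
	  criSocket: unix:///run/containerd/containerd.sock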
	I0921 22:28:41.691973  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:28:41.691983  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:28:41.694185  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:28:37.910661  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.410644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:41.695783  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:28:41.699760  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:28:41.699784  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:28:41.776183  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
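The kindnet manifest is copied from memory over SSH and applied with the versioned kubectl shown above. Whether the daemon actually landed could be checked with a standard query (the label selector is an assumption based on minikube's kindnet manifest, which is not shown in this log):

	kubectl -n kube-system get pods -l app=kindnet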
	I0921 22:28:42.446104  283599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:28:42.446180  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.446216  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.524814  283599 ops.go:34] apiserver oom_adj: -16
	I0921 22:28:42.524918  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.099884  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.600017  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.099303  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.599933  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.100173  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.599961  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.099843  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.599840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.910093  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:44.910463  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:47.099465  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.599512  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.099998  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.599598  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.099840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.599433  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.099931  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.599355  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.099363  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.599865  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.410019  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:49.410428  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:51.410461  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:52.099400  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:52.600056  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.100255  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.599772  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.668975  283599 kubeadm.go:1067] duration metric: took 11.222848116s to wait for elevateKubeSystemPrivileges.
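The repeated "kubectl get sa default" runs above are minikube polling for the default service account to appear before the earlier minikube-rbac clusterrolebinding becomes usable; the poll succeeds once the token controller has created the account, which is what ends the 11-second wait. The equivalent one-off check, illustrative only:

	kubectl get serviceaccount default -o name   # succeeds only after the token controller creates it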
	I0921 22:28:53.669016  283599 kubeadm.go:398] StartCluster complete in 4m35.159165946s
	I0921 22:28:53.669039  283599 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:53.669157  283599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:28:53.670820  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:54.187769  283599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:28:54.187839  283599 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:28:54.187870  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:28:54.187894  283599 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:28:54.190631  283599 out.go:177] * Verifying Kubernetes components...
	I0921 22:28:54.187957  283599 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187964  283599 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187970  283599 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188002  283599 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188076  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:28:54.192035  283599 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192079  283599 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:28:54.192091  283599 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192114  283599 addons.go:162] addon dashboard should already be in state true
	I0921 22:28:54.192162  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192210  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:54.192299  283599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.192580  283599 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192616  283599 addons.go:162] addon metrics-server should already be in state true
	I0921 22:28:54.192633  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192666  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192163  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192666  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.193362  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.193439  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.234974  283599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:28:54.236667  283599 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.236692  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:28:54.236745  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.240000  283599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:28:54.239390  283599 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.241874  283599 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.244335  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:28:54.244363  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	W0921 22:28:54.241874  283599 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:28:54.244424  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.244454  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.244956  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.246658  283599 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.248082  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:28:54.248109  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:28:54.248165  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.272909  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.273873  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.277163  283599 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.277186  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:28:54.277236  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.290041  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.318706  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.398932  283599 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:28:54.399014  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:28:54.496523  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.498431  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.499591  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:28:54.499650  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:28:54.501640  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:28:54.501663  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:28:54.594519  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:28:54.594561  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:28:54.596768  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:28:54.596847  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:28:54.690036  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.690071  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:28:54.700119  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:28:54.700197  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:28:54.876320  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.883544  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:28:54.883571  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:28:54.977006  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:28:54.977040  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:28:55.079240  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:28:55.079273  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:28:55.176309  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:28:55.176344  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:28:55.276282  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:28:55.276317  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:28:55.379016  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.379044  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:28:55.386242  283599 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
	I0921 22:28:55.399129  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.595061  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098450009s)
	I0921 22:28:55.786581  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288109437s)
	I0921 22:28:56.081753  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205376891s)
	I0921 22:28:56.081804  283599 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:56.387178  283599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0921 22:28:56.388690  283599 addons.go:414] enableAddons completed in 2.200797183s
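The addon phase above also rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the container gateway (the sed pipeline logged at 22:28:54.399, confirmed by the "host record injected into CoreDNS" line). A minimal standalone sketch of that edit, with the same pipeline shape; the gateway address 192.168.85.1 is the value from this run and is profile-specific:

    # Sketch of the logged CoreDNS edit (gateway IP taken from this run):
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -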
	I0921 22:28:56.404853  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:53.909716  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:55.910611  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:58.405031  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:00.405582  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:58.409630  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:00.410447  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:02.905572  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:05.405473  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:02.910338  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:05.410066  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:07.904364  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:09.905589  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:07.910279  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:10.410127  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:12.405034  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:14.905741  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:12.910452  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:15.410553  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:17.404952  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:19.405175  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:21.405392  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:17.910479  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:20.410559  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:23.405620  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:25.905592  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:22.909898  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:24.910567  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:27.905775  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:30.405483  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:27.410039  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:29.410131  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:31.410291  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:32.904863  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:35.404709  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:33.910459  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:36.410445  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:37.905690  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:40.405229  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:38.910532  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:41.409671  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:42.905360  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:44.905907  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:43.410422  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:45.910511  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:47.404631  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:49.405402  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:48.409951  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:50.410363  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:51.904997  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:53.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:56.405228  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:52.411261  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:54.910318  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:58.405705  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:00.905348  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:57.409683  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:59.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.404779  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:05.404833  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:01.909994  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.910230  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:06.410036  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:07.405804  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:09.904912  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:08.909550  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:10.910475  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:13.409889  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:14.413229  276511 node_ready.go:38] duration metric: took 4m0.009606009s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:30:14.416209  276511 out.go:177] 
	W0921 22:30:14.417896  276511 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:30:14.417916  276511 out.go:239] * 
	W0921 22:30:14.418711  276511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:30:14.420798  276511 out.go:177] 
	I0921 22:30:11.905117  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:13.905422  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:15.906020  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:18.404644  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:20.404682  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:22.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:24.905233  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:27.404679  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:29.904692  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:31.905266  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:34.405088  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:36.405476  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:38.904414  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:40.905386  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:43.404507  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:45.405356  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:47.904571  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:50.405311  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:52.904564  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:54.905119  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:57.405076  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:59.405121  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:01.904816  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:03.905408  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:05.905565  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:08.404718  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:10.405173  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:12.905041  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:14.905498  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:17.405656  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:19.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:22.405514  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:24.904738  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:27.404689  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:29.405353  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:31.904926  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:34.405471  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:36.905606  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:39.404550  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:41.405513  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:43.905655  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:46.405308  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:48.405699  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:50.905270  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:53.405205  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:55.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:57.905798  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:00.405370  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:02.405480  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:04.904649  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:06.905338  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:09.404845  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:11.405472  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:13.905469  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:16.405211  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:18.405365  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:20.904698  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:23.405458  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:25.905299  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:27.905466  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:29.905633  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:32.404583  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:34.404795  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:36.405323  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:38.405395  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:40.904581  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:42.905533  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:45.405100  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:47.405337  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:49.405417  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:51.905042  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.404654  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.406831  283599 node_ready.go:38] duration metric: took 4m0.00786279s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:32:54.409456  283599 out.go:177] 
	W0921 22:32:54.411031  283599 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:32:54.411055  283599 out.go:239] * 
	W0921 22:32:54.411890  283599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:32:54.413449  283599 out.go:177] 
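Both start attempts fail the same way: the readiness poll in node_ready.go re-reads the node object every few seconds, gives up after 4m0s, and the start then exits with GUEST_START inside the 6m0s "wait for node" budget. Outside the test harness, roughly the same wait can be expressed with kubectl's built-in condition wait (profile names taken from this log):

    # Approximate the node-readiness wait minikube performs:
    kubectl wait --for=condition=Ready \
      node/default-k8s-different-port-20220921221118-10174 --timeout=4m
    # When it times out, the Conditions block explains why:
    kubectl describe node default-k8s-different-port-20220921221118-10174 \
      | sed -n '/Conditions:/,/Addresses:/p'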
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	667a6666662eb       d921cee849482       About a minute ago   Running             kindnet-cni               1                   822a3d4f3d26d
	3b59972efe126       d921cee849482       4 minutes ago        Exited              kindnet-cni               0                   822a3d4f3d26d
	118685bf1243c       1c7d8c51823b5       4 minutes ago        Running             kube-proxy                0                   aab630e852c3d
	943813747ec76       ca0ea1ee3cfd3       4 minutes ago        Running             kube-scheduler            2                   6d3cefcf67297
	be18d7989d5cc       dbfceb93c69b6       4 minutes ago        Running             kube-controller-manager   2                   68cd08f28ec26
	b70eedeefc82f       a8a176a5d5d69       4 minutes ago        Running             etcd                      2                   0f5414f375eea
	a2c10538d6c16       97801f8394908       4 minutes ago        Running             kube-apiserver            2                   741ea276ae553
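The status table shows the CNI daemon (kindnet-cni) exited once (attempt 0) and was restarted about a minute before the dump, while the control-plane containers have been running for roughly four minutes. One way to reproduce this listing, assuming crictl is available inside the minikube node image as it is here:

    # List all CRI containers, running and exited, inside the node:
    minikube ssh -p default-k8s-different-port-20220921221118-10174 -- 'sudo crictl ps -a'
    # Fetch logs from the exited kindnet attempt (id prefix from the table above):
    minikube ssh -p default-k8s-different-port-20220921221118-10174 -- 'sudo crictl logs 3b59972efe126'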
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:24:02 UTC, end at Wed 2022-09-21 22:32:55 UTC. --
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.299280606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.299321167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.299617393Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aab630e852c3dc0f2ed1be0f1234af9f64db3f29dbebb23171eb5ba32e5e7f05 pid=4246 runtime=io.containerd.runc.v2
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.498223346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bd9q4,Uid:dfdbfd18-3fe6-4222-9570-1f1febe969ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"aab630e852c3dc0f2ed1be0f1234af9f64db3f29dbebb23171eb5ba32e5e7f05\""
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.576185328Z" level=info msg="CreateContainer within sandbox \"aab630e852c3dc0f2ed1be0f1234af9f64db3f29dbebb23171eb5ba32e5e7f05\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.595984795Z" level=info msg="CreateContainer within sandbox \"aab630e852c3dc0f2ed1be0f1234af9f64db3f29dbebb23171eb5ba32e5e7f05\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"118685bf1243c6ac8c16eec8c30f295521220e8fcf17757f6f81f9e1c5272837\""
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.596861556Z" level=info msg="StartContainer for \"118685bf1243c6ac8c16eec8c30f295521220e8fcf17757f6f81f9e1c5272837\""
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.784428140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-ngxwf,Uid:48e9bf2d-5096-4913-b521-cbc3b0acc973,Namespace:kube-system,Attempt:0,} returns sandbox id \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\""
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.789519177Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.891563605Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"3b59972efe126124836d07dd686baceb64dbbf348cc964a10b75be0d06e64c90\""
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.894162768Z" level=info msg="StartContainer for \"3b59972efe126124836d07dd686baceb64dbbf348cc964a10b75be0d06e64c90\""
	Sep 21 22:28:54 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:54.985660692Z" level=info msg="StartContainer for \"118685bf1243c6ac8c16eec8c30f295521220e8fcf17757f6f81f9e1c5272837\" returns successfully"
	Sep 21 22:28:55 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:28:55.294949301Z" level=info msg="StartContainer for \"3b59972efe126124836d07dd686baceb64dbbf348cc964a10b75be0d06e64c90\" returns successfully"
	Sep 21 22:29:41 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:29:41.499191310Z" level=error msg="ContainerStatus for \"2b5d6bd43b5205613e89f0ea239f381e8cbb3ec68e64da923811614d9cb2062a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b5d6bd43b5205613e89f0ea239f381e8cbb3ec68e64da923811614d9cb2062a\": not found"
	Sep 21 22:29:41 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:29:41.499808699Z" level=error msg="ContainerStatus for \"a0f12d58000788fc54f28f4e1cd2489de627fd6ee42e32fd0ba5d2877dc4789a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0f12d58000788fc54f28f4e1cd2489de627fd6ee42e32fd0ba5d2877dc4789a\": not found"
	Sep 21 22:29:41 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:29:41.500304947Z" level=error msg="ContainerStatus for \"e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608\": not found"
	Sep 21 22:29:41 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:29:41.500799562Z" level=error msg="ContainerStatus for \"f011002ac4e70aab04f4c174855a4bc3545bbb2ef36828bff1fc20faf237cd89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f011002ac4e70aab04f4c174855a4bc3545bbb2ef36828bff1fc20faf237cd89\": not found"
	Sep 21 22:31:35 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:35.826211510Z" level=info msg="shim disconnected" id=3b59972efe126124836d07dd686baceb64dbbf348cc964a10b75be0d06e64c90
	Sep 21 22:31:35 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:35.826287879Z" level=warning msg="cleaning up after shim disconnected" id=3b59972efe126124836d07dd686baceb64dbbf348cc964a10b75be0d06e64c90 namespace=k8s.io
	Sep 21 22:31:35 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:35.826306602Z" level=info msg="cleaning up dead shim"
	Sep 21 22:31:35 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:35.836632827Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:31:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4760 runtime=io.containerd.runc.v2\n"
	Sep 21 22:31:36 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:36.187251165Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 21 22:31:36 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:36.200260464Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"667a6666662eb715245e6c49d408f391a61521ef6565e10b53d38b8ac51997e6\""
	Sep 21 22:31:36 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:36.200775808Z" level=info msg="StartContainer for \"667a6666662eb715245e6c49d408f391a61521ef6565e10b53d38b8ac51997e6\""
	Sep 21 22:31:36 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:31:36.380798794Z" level=info msg="StartContainer for \"667a6666662eb715245e6c49d408f391a61521ef6565e10b53d38b8ac51997e6\" returns successfully"
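The containerd block is a systemd-journal slice ("Logs begin at ... end at ..."): the kube-proxy and kindnet sandboxes start at 22:28:54, the kindnet shim disconnects at 22:31:35, and the replacement container starts at 22:31:36. The same slice can be pulled directly from the node:

    minikube ssh -p default-k8s-different-port-20220921221118-10174 -- \
      'sudo journalctl -u containerd --since "2022-09-21 22:28:54" --no-pager'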
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220921221118-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220921221118-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:28:38 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220921221118-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:32:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:28:51 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:28:51 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:28:51 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:28:51 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-different-port-20220921221118-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                15db467d-fd65-4163-8719-8617da0ee9c6
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220921221118-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m13s
	  kube-system                 kindnet-ngxwf                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220921221118-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220921221118-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-bd9q4                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220921221118-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m     kube-proxy       
	  Normal  Starting                 4m14s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s  kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s   node-controller  Node default-k8s-different-port-20220921221118-10174 event: Registered Node default-k8s-different-port-20220921221118-10174 in Controller
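Everything in the node description is healthy except the condition that matters: Ready=False with KubeletNotReady, "cni plugin not initialized", which also leaves the node.kubernetes.io/not-ready:NoSchedule taint in place. Those two fields can be checked directly; the jsonpath expressions below are one way to do it, not the test's own mechanism:

    # Message on the Ready condition:
    kubectl get node default-k8s-different-port-20220921221118-10174 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'
    # Taints still on the node:
    kubectl get node default-k8s-different-port-20220921221118-10174 \
      -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'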
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [b70eedeefc82fa3c6b066f602863caf1d1480e05a1a53e90d5e069ccbf264998] <==
	* {"level":"info","ts":"2022-09-21T22:28:35.199Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2022-09-21T22:28:35.202Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2022-09-21T22:28:35.204Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-21T22:28:35.204Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:28:35.204Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:28:35.205Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-09-21T22:28:35.205Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-different-port-20220921221118-10174 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:28:35.793Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:28:35.794Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:32:55 up  1:15,  0 users,  load average: 0.23, 0.47, 1.12
	Linux default-k8s-different-port-20220921221118-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [a2c10538d6c169ae5a43916a76b1c906600bc179a97452948028596b5d7b1e81] <==
	* I0921 22:28:53.860416       1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps
	I0921 22:28:56.006490       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.100.92.205]
	I0921 22:28:56.309274       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.109.63.27]
	I0921 22:28:56.379348       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.20.237]
	W0921 22:28:56.876738       1 handler_proxy.go:105] no RequestInfo found in the context
	W0921 22:28:56.876747       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:28:56.876799       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:28:56.876808       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0921 22:28:56.876835       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:28:56.877969       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:29:56.877654       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:29:56.877701       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:29:56.877707       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:29:56.878719       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:29:56.878799       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:29:56.878812       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:31:56.878495       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:31:56.878548       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:31:56.878554       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:31:56.879650       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:31:56.879781       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:31:56.879801       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
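The apiserver's repeating 503s come from the aggregation layer: v1beta1.metrics.k8s.io is served by the metrics-server Service, and with the only node NotReady its pod can never be scheduled, so the APIService has no healthy backend. Two quick checks consistent with this log:

    # The aggregated APIService should report Available=False here:
    kubectl get apiservice v1beta1.metrics.k8s.io
    # And its Service should have no ready endpoints while the node is NotReady:
    kubectl -n kube-system get endpoints metrics-server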
	
	* 
	* ==> kube-controller-manager [be18d7989d5ccfb21d219bd0e3566a2f3bc927f64d4564f41e26660aced4961e] <==
	* I0921 22:28:56.194170       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0921 22:28:56.198454       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0921 22:28:56.198558       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" failed with pods "dashboard-metrics-scraper-7b94984548-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:28:56.198483       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0921 22:28:56.198728       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7b94984548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0921 22:28:56.203233       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-54596f475f" failed with pods "kubernetes-dashboard-54596f475f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0921 22:28:56.203236       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-54596f475f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0921 22:28:56.280172       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7b94984548" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7b94984548-x6wkq"
	I0921 22:28:56.283662       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-54596f475f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-54596f475f-z5fzh"
	E0921 22:29:23.419823       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:29:23.745058       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:29:53.426311       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:29:53.755893       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:30:23.432893       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:30:23.766353       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:30:53.440170       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:30:53.781448       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:31:23.446459       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:31:23.792884       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:31:53.452441       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:31:53.805383       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:32:23.458795       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:32:23.815299       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:32:53.466508       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:32:53.824741       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
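The controller-manager errors are the same failure seen from the consumer side: the resource-quota controller and the garbage collector run API discovery, discovery includes metrics.k8s.io/v1beta1, and the aggregated call fails. It can be reproduced with a raw request:

    kubectl get --raw /apis/metrics.k8s.io/v1beta1
    # Expected while metrics-server is down:
    #   Error from server (ServiceUnavailable): the server is currently
    #   unable to handle the request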
	
	* 
	* ==> kube-proxy [118685bf1243c6ac8c16eec8c30f295521220e8fcf17757f6f81f9e1c5272837] <==
	* I0921 22:28:55.192383       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0921 22:28:55.192477       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0921 22:28:55.192532       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:28:55.390489       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:28:55.390549       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:28:55.390563       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:28:55.390591       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:28:55.390618       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:28:55.390806       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:28:55.391060       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:28:55.391085       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:28:55.396059       1 config.go:444] "Starting node config controller"
	I0921 22:28:55.396110       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:28:55.396697       1 config.go:317] "Starting service config controller"
	I0921 22:28:55.396740       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:28:55.396775       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:28:55.396786       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:28:55.496508       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:28:55.497685       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0921 22:28:55.497701       1 shared_informer.go:262] Caches are synced for service config
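kube-proxy, by contrast, is fine: it detected the node IP, fell back to the iptables proxier (no mode was configured), and synced all three config caches within a second. Whether it actually programmed rules can be confirmed on the node, e.g.:

    minikube ssh -p default-k8s-different-port-20220921221118-10174 -- \
      'sudo iptables-save -t nat | grep -m5 KUBE-SERVICES'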
	
	* 
	* ==> kube-scheduler [943813747ec7626d34543895ff4fd92fa5c805c9e2a573f3149ec44c228ea93f] <==
	* E0921 22:28:38.297790       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:28:38.297191       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:28:38.297821       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:28:38.297099       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:38.297851       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:38.297406       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:28:38.298122       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0921 22:28:38.298151       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:28:38.298521       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:28:38.298547       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:28:38.298869       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:28:38.298961       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:28:38.299251       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:38.299320       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:28:39.116527       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:28:39.116579       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:28:39.241049       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:28:39.241093       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:28:39.376827       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:39.376894       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:28:39.408191       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:28:39.408231       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0921 22:28:39.678024       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:28:39.678066       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0921 22:28:41.495874       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:24:02 UTC, end at Wed 2022-09-21 22:32:55 UTC. --
	Sep 21 22:30:56 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:30:56.925391    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:01 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:01.926461    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:06 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:06.927471    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:11 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:11.929002    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:16 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:16.930739    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:21 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:21.932352    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:26 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:26.933515    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:31 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:31.934979    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:36 default-k8s-different-port-20220921221118-10174 kubelet[3855]: I0921 22:31:36.184853    3855 scope.go:115] "RemoveContainer" containerID="3b59972efe126124836d07dd686baceb64dbbf348cc964a10b75be0d06e64c90"
	Sep 21 22:31:36 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:36.936764    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:41 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:41.937776    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:46 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:46.939476    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:51 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:51.940691    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:31:56 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:31:56.942231    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:01 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:01.943613    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:06 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:06.944982    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:11 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:11.945807    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:16 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:16.947345    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:21 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:21.948799    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:26 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:26.949733    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:31 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:31.951414    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:36 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:36.953238    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:41 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:41.954127    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:46 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:46.954846    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:32:51 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:32:51.955588    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh: exit status 1 (62.204943ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-mrw5b" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-5bk5h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-x6wkq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-z5fzh" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (534.80s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.32s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-nbnhj" [89da0b35-5751-454f-8344-29734ac4e81f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0921 22:26:38.505144   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 22:26:59.250220   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:28:22.294940   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
E0921 22:29:03.527849   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 22:29:20.481978   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 22:29:21.247676   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:29:27.009411   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:29:41.651253   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
E0921 22:29:58.905572   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:30:08.447684   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0921 22:34:41.650569   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0921 22:34:58.904929   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0921 22:35:08.448511   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-09-21 22:35:24.58685982 +0000 UTC m=+4092.725311830
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe po kubernetes-dashboard-54596f475f-nbnhj -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220439-10174 describe po kubernetes-dashboard-54596f475f-nbnhj -n kubernetes-dashboard: context deadline exceeded (1.765µs)
start_stop_delete_test.go:274: kubectl --context embed-certs-20220921220439-10174 describe po kubernetes-dashboard-54596f475f-nbnhj -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 logs kubernetes-dashboard-54596f475f-nbnhj -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220439-10174 logs kubernetes-dashboard-54596f475f-nbnhj -n kubernetes-dashboard: context deadline exceeded (532ns)
start_stop_delete_test.go:274: kubectl --context embed-certs-20220921220439-10174 logs kubernetes-dashboard-54596f475f-nbnhj -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220921220439-10174
helpers_test.go:235: (dbg) docker inspect embed-certs-20220921220439-10174:

-- stdout --
	[
	    {
	        "Id": "0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a",
	        "Created": "2022-09-21T22:04:47.451918435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265957,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:17:28.927823098Z",
	            "FinishedAt": "2022-09-21T22:17:27.423983604Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hostname",
	        "HostsPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/hosts",
	        "LogPath": "/var/lib/docker/containers/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a/0efc3a0310481f73b6cd48b12157774b33f818f56057de5f30c303eafdd6c31a-json.log",
	        "Name": "/embed-certs-20220921220439-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20220921220439-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220921220439-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c8238508dd48885d3bcd481ab9754fae874a05a7cd3b8c92540515918045fea/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220921220439-10174",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220921220439-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220921220439-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220921220439-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfb659b902e30decb66fbff7256dc4eff717f7e3540c5368b0dbaf96e0b6ac1c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bfb659b902e3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220921220439-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0efc3a031048",
	                        "embed-certs-20220921220439-10174"
	                    ],
	                    "NetworkID": "e71aa30fd3ace87130e43e4abce1f2566d43d95c3b2e37ab1594e3c5a105c1bc",
	                    "EndpointID": "aaa77ea547f85d026152cafd14deb1d062a93066c3408701210f6a40b1b21fac",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:23 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:24:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:24:01.692796  283599 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:24:01.693211  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693232  283599 out.go:309] Setting ErrFile to fd 2...
	I0921 22:24:01.693240  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693504  283599 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:24:01.694665  283599 out.go:303] Setting JSON to false
	I0921 22:24:01.696140  283599 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3993,"bootTime":1663795049,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:24:01.696247  283599 start.go:125] virtualization: kvm guest
	I0921 22:24:01.698874  283599 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:24:01.701214  283599 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:24:01.701128  283599 notify.go:214] Checking for updates...
	I0921 22:24:01.703092  283599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:24:01.704791  283599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:01.706544  283599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:24:01.708172  283599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:23:57.318050  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:59.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:01.710349  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:01.710930  283599 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:24:01.744026  283599 docker.go:137] docker version: linux-20.10.18
	I0921 22:24:01.744136  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.840732  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.764457724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.840851  283599 docker.go:254] overlay module found
	I0921 22:24:01.843051  283599 out.go:177] * Using the docker driver based on existing profile
	I0921 22:24:01.844347  283599 start.go:284] selected driver: docker
	I0921 22:24:01.844371  283599 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.844475  283599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:24:01.845300  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.940944  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.86716064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.941199  283599 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:24:01.941223  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:01.941231  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:01.941249  283599 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.944240  283599 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:24:01.945596  283599 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:24:01.946905  283599 out.go:177] * Pulling base image ...
	I0921 22:24:01.948255  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:01.948306  283599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:24:01.948321  283599 cache.go:57] Caching tarball of preloaded images
	I0921 22:24:01.948361  283599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:24:01.948572  283599 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:24:01.948588  283599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:24:01.948702  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:01.976413  283599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:24:01.976445  283599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:24:01.976457  283599 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:24:01.976502  283599 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:24:01.976622  283599 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 78.111µs
	I0921 22:24:01.976652  283599 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:24:01.976660  283599 fix.go:55] fixHost starting: 
	I0921 22:24:01.976899  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.002084  283599 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221118-10174: state=Stopped err=<nil>
	W0921 22:24:02.002122  283599 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:24:02.004632  283599 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	I0921 22:24:00.289698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.790230  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.006307  283599 cli_runner.go:164] Run: docker start default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.358108  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.385298  283599 kic.go:415] container "default-k8s-different-port-20220921221118-10174" state is running.
	I0921 22:24:02.385684  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.412757  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:02.412997  283599 machine.go:88] provisioning docker machine ...
	I0921 22:24:02.413031  283599 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:24:02.413108  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.438229  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:02.438400  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:02.438416  283599 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:24:02.439038  283599 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34230->127.0.0.1:49443: read: connection reset by peer
	I0921 22:24:05.584682  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:24:05.584766  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.608825  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:05.609026  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:05.609059  283599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:24:05.739656  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
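The shell snippet just executed makes the new hostname locally resolvable: if no /etc/hosts line already carries the profile name, an existing 127.0.1.1 entry is rewritten in place, otherwise a fresh one is appended. A quick way to confirm the result inside the container (a sketch, not part of the minikube flow):

    # expect a 127.0.1.1 entry carrying the profile name
    grep '^127.0.1.1' /etc/hosts
    # 127.0.1.1 default-k8s-different-port-20220921221118-10174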
	I0921 22:24:05.739694  283599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:24:05.739749  283599 ubuntu.go:177] setting up certificates
	I0921 22:24:05.739765  283599 provision.go:83] configureAuth start
	I0921 22:24:05.739824  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.764789  283599 provision.go:138] copyHostCerts
	I0921 22:24:05.764839  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:24:05.764846  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:24:05.764904  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:24:05.764993  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:24:05.765005  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:24:05.765028  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:24:05.765086  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:24:05.765095  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:24:05.765118  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:24:05.765169  283599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:24:05.914466  283599 provision.go:172] copyRemoteCerts
	I0921 22:24:05.914534  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:24:05.914564  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.939805  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.031315  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:24:06.048618  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:24:06.065530  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:24:06.083800  283599 provision.go:86] duration metric: configureAuth took 344.021748ms
	I0921 22:24:06.083828  283599 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:24:06.083988  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:06.083999  283599 machine.go:91] provisioned docker machine in 3.670987023s
	I0921 22:24:06.084006  283599 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:24:06.084012  283599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:24:06.084049  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:24:06.084088  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.108286  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.203139  283599 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:24:06.205811  283599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:24:06.205839  283599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:24:06.205852  283599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:24:06.205864  283599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:24:06.205880  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:24:06.205944  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:24:06.206037  283599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:24:06.206142  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:24:06.212569  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:06.229418  283599 start.go:303] post-start completed in 145.398445ms
	I0921 22:24:06.229483  283599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:24:06.229517  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.253305  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.340119  283599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:24:06.344050  283599 fix.go:57] fixHost completed within 4.367385464s
	I0921 22:24:06.344071  283599 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 4.367430848s
	I0921 22:24:06.344157  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368445  283599 ssh_runner.go:195] Run: systemctl --version
	I0921 22:24:06.368501  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368505  283599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:24:06.368550  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.394444  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.396066  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.513229  283599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:24:06.524587  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:24:06.533746  283599 docker.go:188] disabling docker service ...
	I0921 22:24:06.533795  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:24:06.543075  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:24:06.551813  283599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:24:06.629483  283599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:24:01.818416  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:04.317912  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:05.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:07.290168  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:06.707030  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:24:06.717244  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:24:06.729638  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.737194  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.744928  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.752650  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
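The four sed invocations above patch existing keys in /etc/containerd/config.toml in place rather than templating a new file. A sketch of what the touched keys should read afterwards (key names and values taken from the commands; any surrounding TOML sections are left as they were):

    # Show the keys rewritten by the sed commands above:
    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # sandbox_image = "registry.k8s.io/pause:3.8"
    # restrict_oom_score_adj = false
    # SystemdCgroup = false
    # conf_dir = "/etc/cni/net.mk"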
	I0921 22:24:06.760419  283599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:24:06.766584  283599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:24:06.772903  283599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:24:06.844578  283599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:24:06.917291  283599 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:24:06.917353  283599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:24:06.921118  283599 start.go:471] Will wait 60s for crictl version
	I0921 22:24:06.921184  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:06.948257  283599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:24:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:24:06.817672  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.317278  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:11.317829  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.789185  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:12.289080  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:13.817410  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:15.817496  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:17.995620  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:18.018705  283599 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:24:18.018768  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.047337  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.078051  283599 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:24:14.289667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:16.789199  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:18.079491  283599 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:24:18.103308  283599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:24:18.106553  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.115993  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:18.116056  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.139896  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.139921  283599 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:24:18.139964  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.163323  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.163344  283599 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:24:18.163382  283599 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:24:18.186911  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:18.186935  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:18.186948  283599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:24:18.186961  283599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:24:18.187074  283599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:24:18.187152  283599 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
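Two details of the kubelet unit dump above are easy to miss: the empty ExecStart= line is systemd drop-in syntax that clears the ExecStart inherited from the base kubelet.service before the pinned command line is set, and the file is delivered to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 540-byte scp below). A hedged Go sketch of writing such a drop-in and reloading units; systemctl daemon-reload is the only external command assumed, and the kubelet flags are abbreviated:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func must(err error) {
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	
	func main() {
		// First ExecStart= resets the base unit; the second sets the new command.
		unit := "[Unit]\nWants=containerd.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --config=/var/lib/kubelet/config.yaml\n\n[Install]\n"
		must(os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755))
		must(os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644))
		// Pick up the drop-in, as minikube does after the scp.
		must(exec.Command("systemctl", "daemon-reload").Run())
	}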
	I0921 22:24:18.187196  283599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:24:18.194012  283599 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:24:18.194081  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:24:18.200606  283599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:24:18.212899  283599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:24:18.224754  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:24:18.236775  283599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:24:18.239439  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.248263  283599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:24:18.248377  283599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:24:18.248421  283599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:24:18.248485  283599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:24:18.248538  283599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:24:18.248575  283599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:24:18.248658  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:24:18.248689  283599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:24:18.248705  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:24:18.248729  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:24:18.248758  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:24:18.248780  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:24:18.248846  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:18.249439  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:24:18.265894  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:24:18.282128  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:24:18.298690  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:24:18.315323  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:24:18.331842  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:24:18.348196  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:24:18.364368  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:24:18.380401  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:24:18.396696  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:24:18.413238  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:24:18.429482  283599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:24:18.441654  283599 ssh_runner.go:195] Run: openssl version
	I0921 22:24:18.446184  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:24:18.453215  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456119  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456166  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.460690  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:24:18.467196  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:24:18.474449  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477401  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477445  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.481956  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:24:18.488418  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:24:18.495604  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498556  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498600  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.503245  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
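The openssl x509 -hash -noout runs above compute each PEM's subject hash, and the ln -fs lines publish the cert as /etc/ssl/certs/<hash>.0, the layout OpenSSL-based clients use to look up trusted CAs. A small sketch of that step, shelling out to the same openssl invocation (the trustCert helper and hard-coded path are illustrative):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	// trustCert hashes a PEM certificate and symlinks it into /etc/ssl/certs
	// under "<subject-hash>.0", matching the test -L || ln -fs step in the log.
	func trustCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		_ = os.Remove(link) // force-replace, like ln -fs
		return os.Symlink(pem, link)
	}
	
	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}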
	I0921 22:24:18.509856  283599 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:18.509953  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:24:18.509985  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:18.533346  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:18.533375  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:18.533382  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:18.533388  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:18.533393  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:18.533402  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:18.533407  283599 cri.go:87] found id: ""
	I0921 22:24:18.533444  283599 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:24:18.545553  283599 cri.go:114] JSON = null
	W0921 22:24:18.545605  283599 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
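The warning above is expected on this path: crictl sees six kube-system containers, while runc, asked which are paused, returns null, so there is simply nothing to unpause before the restart. A sketch of the same two probes using exactly the commands from the log (the comparison printed at the end is illustrative):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Container IDs labelled into the kube-system namespace.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		// Raw runc state; "null" JSON here is what triggers the warning above.
		paused, err := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("crictl: %d containers; runc list: %s\n",
			len(strings.Fields(string(ids))), strings.TrimSpace(string(paused)))
	}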
	I0921 22:24:18.545686  283599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:24:18.552635  283599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:24:18.552664  283599 kubeadm.go:627] restartCluster start
	I0921 22:24:18.552705  283599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:24:18.558944  283599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.559817  283599 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220921221118-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:18.560296  283599 kubeconfig.go:127] "default-k8s-different-port-20220921221118-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:24:18.561146  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:24:18.562655  283599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:24:18.568841  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.568884  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.576584  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.776932  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.777023  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.786228  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.977461  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.977542  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.986186  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.177398  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.177487  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.186159  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.377453  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.377534  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.385921  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.577206  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.577296  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.586370  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.777572  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.777676  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.786797  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.977103  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.977188  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.985822  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.177132  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.177234  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.185876  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.377187  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.377298  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.386086  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.577399  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.577488  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.586142  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.777447  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.777527  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.786547  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.976769  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.976865  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.985682  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.176870  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.176951  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.185811  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.377116  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.377184  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.385829  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.577109  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.577202  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.585911  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.585933  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.585979  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.593866  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.593893  283599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
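The block of repeated "Checking apiserver status" entries above is a bounded poll: pgrep the apiserver process every ~200ms, and if it never appears, conclude the control plane is down and fall through to a reconfigure. A compressed sketch of that probe loop (the 3-second budget is illustrative; the real wait above retries for considerably longer):

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// apiserverRunning mirrors the exact probe in the log: an exact-match
	// pgrep on the apiserver command line; exit status 0 means it is up.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}
	
	func main() {
		deadline := time.Now().Add(3 * time.Second)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("apiserver process found")
				return
			}
			time.Sleep(200 * time.Millisecond)
		}
		fmt.Println("apiserver never appeared; cluster needs reconfigure")
	}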
	I0921 22:24:21.593899  283599 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:24:21.593908  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:24:21.593964  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:21.618017  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:21.618041  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:21.618048  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:21.618058  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:21.618064  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:21.618072  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:21.618078  283599 cri.go:87] found id: ""
	I0921 22:24:21.618082  283599 cri.go:232] Stopping containers: [1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767]
	I0921 22:24:21.618118  283599 ssh_runner.go:195] Run: which crictl
	I0921 22:24:21.621347  283599 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767
	I0921 22:24:21.645531  283599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:24:21.655622  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:24:21.662408  283599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 21 22:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep 21 22:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:11 /etc/kubernetes/scheduler.conf
	
	I0921 22:24:21.662459  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0921 22:24:21.669029  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0921 22:24:21.675699  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.682316  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.682358  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.688501  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0921 22:24:17.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:19.818111  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:18.789856  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.289803  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.694928  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.696684  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:24:21.703329  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710109  283599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710132  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:21.757457  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.810948  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053458682s)
	I0921 22:24:22.810976  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.943243  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.995873  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:23.097694  283599 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:24:23.097766  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:23.608210  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.107567  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.187217  283599 api_server.go:71] duration metric: took 1.089523123s to wait for apiserver process to appear ...
	I0921 22:24:24.187296  283599 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:24:24.187323  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:24.187688  283599 api_server.go:256] stopped: https://192.168.85.2:8444/healthz: Get "https://192.168.85.2:8444/healthz": dial tcp 192.168.85.2:8444: connect: connection refused
	I0921 22:24:24.688449  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:22.317667  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:24.317872  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:23.789425  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:25.789684  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.790412  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.592182  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:24:27.592315  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:24:27.688579  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.694601  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:27.694667  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.187832  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.192979  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.193004  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.688623  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.695172  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.695285  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:29.187841  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:29.193157  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0921 22:24:29.198775  283599 api_server.go:140] control plane version: v1.25.2
	I0921 22:24:29.198796  283599 api_server.go:130] duration metric: took 5.011488882s to wait for apiserver health ...
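The healthz wait above moves through the expected phases: connection refused while the apiserver binds, 403 before anonymous access to /healthz is authorized, 500 while post-start hooks (rbac/bootstrap-roles and friends) finish, then 200 "ok". A minimal sketch of such a poll loop; certificate verification is skipped here for brevity, whereas minikube actually trusts its own CA:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 and 500 are transient phases; only 200 "ok" ends the wait.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}
	
	func main() {
		if err := waitHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
	}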
	I0921 22:24:29.198805  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:29.198812  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:29.201314  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:24:29.202798  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:24:29.206616  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:24:29.206636  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:24:29.221913  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
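The CNI step above is a plain kubectl apply of the kindnet manifest that was just scp'd to /var/tmp/minikube/cni.yaml, run with the version-pinned kubectl and the in-VM kubeconfig. The equivalent invocation wrapped in Go:

	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.25.2/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}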
	I0921 22:24:29.826767  283599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:24:29.834488  283599 system_pods.go:59] 9 kube-system pods found
	I0921 22:24:29.834517  283599 system_pods.go:61] "coredns-565d847f94-mrkjn" [7f364c47-74ce-4271-aab1-67bba320c586] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834528  283599 system_pods.go:61] "etcd-default-k8s-different-port-20220921221118-10174" [8f0f58a7-7eae-43db-840f-bde95464e94e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:24:29.834533  283599 system_pods.go:61] "kindnet-7wbpp" [3f16ae0b-2f66-4f1e-b234-74570472a7f8] Running
	I0921 22:24:29.834539  283599 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220921221118-10174" [3a935d6b-ca77-4bcb-ae19-0a2af77c12a1] Running
	I0921 22:24:29.834544  283599 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220921221118-10174" [d01ee91a-5587-48e9-a235-68a73d5fedef] Running
	I0921 22:24:29.834549  283599 system_pods.go:61] "kube-proxy-lzphc" [611dbd37-0771-41b2-b886-93f46d79f802] Running
	I0921 22:24:29.834554  283599 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220921221118-10174" [998713da-f133-43f7-9f11-c6110ad66c8d] Running
	I0921 22:24:29.834561  283599 system_pods.go:61] "metrics-server-5c8fd5cf8-sshzh" [5972fae5-09c2-4e2e-b609-ef85f72311e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834572  283599 system_pods.go:61] "storage-provisioner" [ca16dea1-fb3d-4cc1-b449-2236aefcc627] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834577  283599 system_pods.go:74] duration metric: took 7.786123ms to wait for pod list to return data ...
	I0921 22:24:29.834588  283599 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:24:29.837059  283599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:24:29.837085  283599 node_conditions.go:123] node cpu capacity is 8
	I0921 22:24:29.837096  283599 node_conditions.go:105] duration metric: took 2.500371ms to run NodePressure ...
	I0921 22:24:29.837121  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:30.025715  283599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029542  283599 kubeadm.go:778] kubelet initialised
	I0921 22:24:30.029565  283599 kubeadm.go:779] duration metric: took 3.826857ms waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029572  283599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:24:30.034316  283599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
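The pod_ready entries that follow poll the pod's Ready condition; coredns stays Pending here because the single node still carries the node.kubernetes.io/not-ready taint, so the scheduler cannot place it. A sketch of an equivalent wait written against client-go (an assumption: this is not minikube's own helper, and it needs k8s.io/client-go in go.mod):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(2 * time.Second) {
			p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-565d847f94-mrkjn", metav1.GetOptions{})
			if err == nil && podReady(p) {
				fmt.Println("pod is Ready")
				return
			}
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}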
	I0921 22:24:26.817684  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:29.317793  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:31.318001  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:30.289213  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.039865  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.040511  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:36.539322  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:33.817371  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:35.817456  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.789530  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:37.289284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:38.539700  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:41.040333  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:37.817967  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:40.318244  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:39.789967  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:42.289726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:43.539636  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:45.540134  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:42.817716  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.818139  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.789355  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:47.288847  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:48.040425  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:50.539475  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:47.317825  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.318211  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.289182  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:51.289938  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:52.539590  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:54.540310  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:51.817491  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:55.818165  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.789719  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:56.289013  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:57.040311  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:59.539775  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.318151  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:00.318254  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.289251  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:00.789124  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.789910  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.040207  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.540336  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.817283  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.817911  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:05.290121  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.789553  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.039774  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.039928  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:11.040136  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.817957  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:10.289528  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:12.789110  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:13.540022  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:16.040513  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:12.317490  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.818433  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.789413  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:16.789947  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:18.539457  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:21.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:17.317880  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.289330  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:21.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:23.539701  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.039677  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:22.317640  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:24.318075  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:23.789488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:25.789726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:28.539400  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:30.540154  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.817737  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.818270  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:31.318310  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.289323  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:30.789442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:32.789667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:33.039502  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.039801  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:33.318392  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.818247  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:34.790488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.288758  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.539221  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.539681  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:41.539999  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:38.317564  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:40.317641  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.289052  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:41.789424  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:44.040284  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:46.540320  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:42.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:45.317732  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:44.289331  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:46.789866  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:49.039837  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:51.540123  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:47.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:49.314620  276511 pod_ready.go:81] duration metric: took 4m0.002300536s waiting for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	E0921 22:25:49.314670  276511 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:25:49.314692  276511 pod_ready.go:38] duration metric: took 4m0.007078344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:25:49.314717  276511 kubeadm.go:631] restartCluster took 4m10.710033944s
	W0921 22:25:49.314858  276511 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:25:49.314887  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
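
At this point the wait loop has exhausted its 4m0s budget (pod_ready.go:81/:66 above), so rather than retry, minikube abandons the restartCluster path and wipes the control plane with kubeadm reset before re-running kubeadm init. The wait it just gave up on is roughly equivalent to this sketch (timeout and the k8s-app=kube-dns label taken from the log line above; the other component labels omitted for brevity):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
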
	I0921 22:25:49.289362  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:51.789574  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:54.040292  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:56.540637  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:52.154431  276511 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.839517184s)
	I0921 22:25:52.154487  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:25:52.163969  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:25:52.170969  276511 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:25:52.171027  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:25:52.177996  276511 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:25:52.178063  276511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:25:52.213969  276511 kubeadm.go:317] W0921 22:25:52.213140    3321 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:25:52.246713  276511 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:25:52.310910  276511 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
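
Both preflight WARNINGs are expected inside minikube's docker-driver container: the GCP host kernel (5.15.0-1017-gcp) does not ship the "configs" module, so kubeadm cannot read the kernel config, and minikube launches kubelet itself rather than relying on systemd enablement. On a regular host the second warning would be cleared exactly as kubeadm suggests (sketch; run on the node):

    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet   # should now print "enabled"
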
	I0921 22:25:54.288796  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:56.289801  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:01.184243  276511 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:26:01.184314  276511 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:26:01.184416  276511 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:26:01.184507  276511 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:26:01.184592  276511 kubeadm.go:317] OS: Linux
	I0921 22:26:01.184673  276511 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:26:01.184737  276511 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:26:01.184793  276511 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:26:01.184856  276511 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:26:01.184921  276511 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:26:01.184985  276511 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:26:01.185046  276511 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:26:01.185099  276511 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:26:01.185157  276511 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:26:01.185254  276511 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:26:01.185380  276511 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:26:01.185526  276511 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:26:01.185623  276511 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:26:01.187463  276511 out.go:204]   - Generating certificates and keys ...
	I0921 22:26:01.187540  276511 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:26:01.187594  276511 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:26:01.187659  276511 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:26:01.187785  276511 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:26:01.187900  276511 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:26:01.187958  276511 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:26:01.188014  276511 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:26:01.188086  276511 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:26:01.188221  276511 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:26:01.188336  276511 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:26:01.188409  276511 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:26:01.188488  276511 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:26:01.188556  276511 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:26:01.188636  276511 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:26:01.188731  276511 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:26:01.188817  276511 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:26:01.188953  276511 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:26:01.189087  276511 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:26:01.189191  276511 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:26:01.189310  276511 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:26:01.191284  276511 out.go:204]   - Booting up control plane ...
	I0921 22:26:01.191385  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:26:01.191486  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:26:01.191561  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:26:01.191748  276511 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:26:01.191985  276511 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:26:01.192105  276511 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503275 seconds
	I0921 22:26:01.192289  276511 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:26:01.192460  276511 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:26:01.192545  276511 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:26:01.192839  276511 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:26:01.192906  276511 kubeadm.go:317] [bootstrap-token] Using token: 9ldpwz.b05pw96cyce3l1nr
	I0921 22:26:01.194593  276511 out.go:204]   - Configuring RBAC rules ...
	I0921 22:26:01.194724  276511 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:26:01.194852  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:26:01.195058  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:26:01.195234  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:26:01.195387  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:26:01.195500  276511 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:26:01.195644  276511 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:26:01.195703  276511 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:26:01.195765  276511 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:26:01.195777  276511 kubeadm.go:317] 
	I0921 22:26:01.195861  276511 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:26:01.195872  276511 kubeadm.go:317] 
	I0921 22:26:01.195980  276511 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:26:01.196004  276511 kubeadm.go:317] 
	I0921 22:26:01.196036  276511 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:26:01.196117  276511 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:26:01.196194  276511 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:26:01.196207  276511 kubeadm.go:317] 
	I0921 22:26:01.196286  276511 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:26:01.196303  276511 kubeadm.go:317] 
	I0921 22:26:01.196379  276511 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:26:01.196404  276511 kubeadm.go:317] 
	I0921 22:26:01.196485  276511 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:26:01.196595  276511 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:26:01.196694  276511 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:26:01.196706  276511 kubeadm.go:317] 
	I0921 22:26:01.196820  276511 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:26:01.196920  276511 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:26:01.196931  276511 kubeadm.go:317] 
	I0921 22:26:01.197032  276511 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197181  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:26:01.197220  276511 kubeadm.go:317] 	--control-plane 
	I0921 22:26:01.197231  276511 kubeadm.go:317] 
	I0921 22:26:01.197362  276511 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:26:01.197381  276511 kubeadm.go:317] 
	I0921 22:26:01.197495  276511 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197628  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
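	The join commands above embed a one-time bootstrap token (9ldpwz.b05pw96cyce3l1nr) plus the SHA-256 hash of the cluster CA public key. If the token has expired by the time another node joins, both values can be regenerated on the control plane; a minimal sketch using standard kubeadm/openssl invocations, assuming the default /etc/kubernetes/pki layout:

	    # Mint a fresh bootstrap token and print a ready-to-use join command:
	    kubeadm token create --print-join-command
	    # Recompute the --discovery-token-ca-cert-hash value by hand:
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'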
	I0921 22:26:01.197660  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:26:01.197674  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:26:01.199797  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:25:59.039749  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.040507  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
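	The pod_ready lines from PID 283599 repeat essentially unchanged for the rest of this section: coredns-565d847f94-mrkjn stays Pending because the single node still carries the node.kubernetes.io/not-ready taint, so the scheduler has nowhere to place it. A quick way to confirm the taint and the resulting scheduling failure against a live cluster (a sketch, not something the harness runs):

	    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
	    kubectl -n kube-system describe pod coredns-565d847f94-mrkjn   # Events section shows the taint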
	I0921 22:26:01.201405  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:26:01.205181  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:26:01.205199  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:26:01.218971  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
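	The cni.yaml applied here is the kindnet manifest minikube generated after detecting the docker driver + containerd runtime at 22:26:01.197674. Assuming the DaemonSet keeps kindnet's usual name (an assumption; the log never prints it), its rollout could be watched with:

	    sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system rollout status daemonset kindnet   # DaemonSet name assumed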
	I0921 22:25:58.789397  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:00.789911  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:03.540344  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:06.039881  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:02.006490  276511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:26:02.006560  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.006575  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.013858  276511 ops.go:34] apiserver oom_adj: -16
	I0921 22:26:02.099832  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.694112  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.194089  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.693535  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.193854  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.693713  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.194101  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.694288  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.193619  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.693501  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.289345  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:05.789183  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:08.040230  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:10.539463  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:07.193590  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.693901  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.194072  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.694197  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.193914  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.693488  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.194416  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.693496  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.194435  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.694097  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.289258  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:10.789536  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.790035  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.194461  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.694279  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.193818  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.693711  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.758985  276511 kubeadm.go:1067] duration metric: took 11.752476269s to wait for elevateKubeSystemPrivileges.
	I0921 22:26:13.759013  276511 kubeadm.go:398] StartCluster complete in 4m35.198807914s
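	The run of `kubectl get sa default` calls above is minikube polling, at the ~500ms cadence visible in the timestamps, until the default ServiceAccount exists — the signal that service-account machinery is up and the minikube-rbac ClusterRoleBinding created at 22:26:02 is usable; the 11.75s duration metric sums that loop. An equivalent hand-rolled wait, sketched in shell:

	    until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # matches the polling interval seen in the log
	    done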
	I0921 22:26:13.759030  276511 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:13.759144  276511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:26:13.760661  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:14.276964  276511 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
	I0921 22:26:14.277021  276511 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:26:14.277060  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:26:14.277154  276511 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:26:14.277306  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:26:14.279846  276511 out.go:177] * Verifying Kubernetes components...
	I0921 22:26:14.281313  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:26:14.281349  276511 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281359  276511 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281373  276511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:26:14.281387  276511 addons.go:65] Setting metrics-server=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281397  276511 addons.go:65] Setting dashboard=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281436  276511 addons.go:153] Setting addon dashboard=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281450  276511 addons.go:162] addon dashboard should already be in state true
	I0921 22:26:14.281497  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281400  276511 addons.go:153] Setting addon metrics-server=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281576  276511 addons.go:162] addon metrics-server should already be in state true
	I0921 22:26:14.281377  276511 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	I0921 22:26:14.281640  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	W0921 22:26:14.281653  276511 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:26:14.281684  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281727  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282004  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282138  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282139  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.321366  276511 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.323218  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:26:14.323243  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:26:14.323321  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.323433  276511 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.323452  276511 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:26:14.323478  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.323995  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.331074  276511 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:26:14.333243  276511 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.335670  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:26:14.335699  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:26:14.335828  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.338700  276511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:26:12.540251  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:15.040305  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:14.339971  276511 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.339996  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:26:14.340067  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.357088  276511 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.357118  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:26:14.357179  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.363845  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.373248  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.374001  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.403584  276511 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:26:14.403673  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:26:14.403710  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
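	The kubectl replace pipeline above rewrites the coredns ConfigMap in flight: sed inserts a hosts block ahead of the Corefile's forward directive so that host.minikube.internal resolves to the docker network gateway. The fragment it injects expands to:

	    hosts {
	       192.168.94.1 host.minikube.internal
	       fallthrough
	    }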
	I0921 22:26:14.597706  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:26:14.597740  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:26:14.598185  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:26:14.598208  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:26:14.678717  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.691157  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:26:14.691190  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:26:14.776824  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.780103  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:26:14.780131  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:26:14.796772  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.796802  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:26:14.877240  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:26:14.877270  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:26:14.886529  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.982072  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:26:14.982106  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:26:15.083042  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:26:15.083073  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:26:15.185025  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:26:15.185058  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:26:15.288358  276511 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:26:15.295798  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:26:15.295830  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:26:15.390667  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:26:15.390693  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:26:15.415462  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.415496  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:26:15.492343  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.887638  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.208874575s)
	I0921 22:26:15.887703  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110843194s)
	I0921 22:26:15.982100  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095511944s)
	I0921 22:26:15.982142  276511 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220921220832-10174"
	I0921 22:26:16.410487  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:16.706261  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213866962s)
	I0921 22:26:16.708800  276511 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:26:16.709899  276511 addons.go:414] enableAddons completed in 2.432760887s
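	With all four addons reported enabled, their workloads can be spot-checked against the profile. A sketch (the kubernetes-dashboard namespace is the addon's usual default, assumed rather than shown in this log):

	    out/minikube-linux-amd64 -p no-preload-20220921220832-10174 addons list
	    out/minikube-linux-amd64 -p no-preload-20220921220832-10174 kubectl -- \
	      get pods -n kubernetes-dashboard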
	I0921 22:26:15.290491  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.789818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.539620  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:20.039549  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:18.911099  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:21.409684  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:20.289226  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292776  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292799  265259 node_ready.go:38] duration metric: took 4m0.017444735s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:26:22.294631  265259 out.go:177] 
	W0921 22:26:22.296115  265259 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:26:22.296143  265259 out.go:239] * 
	W0921 22:26:22.296927  265259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:26:22.298511  265259 out.go:177] 
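	At this point the embed-certs profile has burned its wait budget (4m0.02s of it in the node_ready loop alone) without the node ever reporting Ready, and the run exits with GUEST_START. Beyond the logs.txt capture suggested in the box above, the node conditions behind the recurring "Ready":"False" status could be pulled from the still-running container with something like:

	    out/minikube-linux-amd64 -p embed-certs-20220921220439-10174 kubectl -- \
	      describe node embed-certs-20220921220439-10174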
	I0921 22:26:22.539641  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:25.039622  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:23.410505  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:25.909606  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:27.539385  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:29.539878  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:31.540249  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:27.910578  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:30.410429  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:33.540339  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:35.541025  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:32.910296  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:34.911081  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:38.039663  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:40.539522  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:37.410360  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:39.410436  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:42.540000  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:45.040231  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:41.909862  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:43.910310  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:46.409644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:47.540283  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:50.039510  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:48.410566  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:50.410732  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:52.039949  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:54.540144  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:52.910395  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:54.910495  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:57.039966  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:59.040209  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.539473  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:57.409907  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:59.410288  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:03.540044  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:06.040183  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.910153  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:04.409817  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:06.410562  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:08.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:10.539873  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:08.910302  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:11.410571  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:13.039961  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:15.040246  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:13.909964  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:15.910369  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:17.539604  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:19.539765  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:18.410585  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:20.910125  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:22.040021  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:24.539835  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:26.540240  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:22.910441  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:25.410069  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:28.540555  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:31.039426  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:27.410438  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:29.410512  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:33.040327  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:35.040601  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:31.910290  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:34.409802  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:37.540256  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:40.039584  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:36.909982  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:39.409679  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:41.410245  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:42.539492  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:44.539613  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:46.540433  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:43.909863  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:45.910696  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:49.039750  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:51.040314  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:48.410147  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:50.410237  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:53.040407  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:55.540422  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:52.910535  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:55.410601  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:58.040486  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:00.540148  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:57.910322  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:59.910846  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:03.039402  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:05.040045  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:02.410370  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:04.410513  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:07.040112  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:09.539484  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:11.539916  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:06.910328  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:09.409926  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:11.410618  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:14.040357  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:16.040410  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:13.909830  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:15.910746  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:18.539390  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:20.539944  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:18.409773  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:20.410208  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:22.540064  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:25.039880  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:22.410702  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:24.909931  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:27.539325  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:29.540282  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:30.037464  283599 pod_ready.go:81] duration metric: took 4m0.003103432s waiting for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
	E0921 22:28:30.037491  283599 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:28:30.037512  283599 pod_ready.go:38] duration metric: took 4m0.007931264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:28:30.037542  283599 kubeadm.go:631] restartCluster took 4m11.484871611s
	W0921 22:28:30.037694  283599 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
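The four minutes above are a fixed-interval poll of that Ready condition; once the deadline passes, restartCluster gives up and minikube falls back to kubeadm reset followed by a fresh init. The polling pattern looks roughly like the stdlib sketch below (illustrative; not minikube's actual pod_ready code).

```go
package main

import (
	"fmt"
	"time"
)

// waitFor polls cond every interval until it reports true or timeout
// elapses, the same shape as the pod_ready/node_ready loops in this log.
func waitFor(timeout, interval time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting %v for condition", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor(4*time.Minute, 2*time.Second, func() (bool, error) {
		return false, nil // stand-in for "is coredns Ready yet?"
	})
	fmt.Println(err)
}
```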
	I0921 22:28:30.037731  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:28:26.910183  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:28.910722  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:31.410255  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:32.836415  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.798662315s)
	I0921 22:28:32.836470  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:32.846281  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:28:32.853286  283599 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:28:32.853347  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:28:32.860321  283599 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
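Exit status 2 is expected here: the kubeadm reset a few lines up already removed those kubeconfigs, so the stale-config probe finds nothing to clean and minikube proceeds straight to a fresh kubeadm init. The probe amounts to an existence check on four files; below is a stat-based equivalent of that ls call (illustrative, not minikube's implementation).

```go
package main

import (
	"fmt"
	"os"
)

// kubeadmConfs are the kubeconfigs kubeadm writes under /etc/kubernetes,
// the same four files probed by `sudo ls -la ...` in the log.
var kubeadmConfs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

// staleConfigPresent reports whether any leftover kubeconfig exists and
// stale-config cleanup is therefore worthwhile.
func staleConfigPresent() bool {
	for _, f := range kubeadmConfs {
		if _, err := os.Stat(f); err == nil {
			return true
		}
	}
	return false // fresh state, as after the kubeadm reset above
}

func main() { fmt.Println(staleConfigPresent()) }
```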
	I0921 22:28:32.860372  283599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:28:32.899444  283599 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:28:32.899530  283599 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:28:32.927597  283599 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:28:32.927684  283599 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:28:32.927762  283599 kubeadm.go:317] OS: Linux
	I0921 22:28:32.927817  283599 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:28:32.927857  283599 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:28:32.927895  283599 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:28:32.927957  283599 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:28:32.928004  283599 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:28:32.928045  283599 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:28:32.928083  283599 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:28:32.928121  283599 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:28:32.928158  283599 kubeadm.go:317] CGROUPS_BLKIO: enabled
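The CGROUPS_* lines come from kubeadm's system verification, which reads the kernel's controller table; on Linux that is /proc/cgroups, one row per controller with an enabled flag in the fourth column. A small reader that produces the same view (a sketch, not kubeadm's verifier):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// enabledCgroups parses /proc/cgroups into controller -> enabled, the raw
// data behind the CGROUPS_CPU / CGROUPS_MEMORY / ... lines above.
func enabledCgroups() (map[string]bool, error) {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	out := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		// skip the "#subsys_name hierarchy num_cgroups enabled" header
		if len(fields) != 4 || strings.HasPrefix(fields[0], "#") {
			continue
		}
		out[fields[0]] = fields[3] == "1"
	}
	return out, sc.Err()
}

func main() {
	controllers, err := enabledCgroups()
	fmt.Println(controllers, err)
}
```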
	I0921 22:28:32.994267  283599 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:28:32.994393  283599 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:28:32.994471  283599 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:28:33.113433  283599 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:28:33.118993  283599 out.go:204]   - Generating certificates and keys ...
	I0921 22:28:33.119145  283599 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:28:33.119247  283599 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:28:33.119310  283599 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:28:33.119362  283599 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:28:33.119432  283599 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:28:33.119501  283599 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:28:33.119554  283599 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:28:33.119605  283599 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:28:33.119666  283599 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:28:33.119759  283599 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:28:33.119797  283599 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:28:33.119873  283599 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:28:33.240892  283599 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:28:33.319256  283599 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:28:33.514290  283599 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:28:33.579294  283599 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:28:33.591185  283599 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:28:33.591951  283599 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:28:33.592077  283599 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:28:33.671909  283599 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:28:33.674209  283599 out.go:204]   - Booting up control plane ...
	I0921 22:28:33.674356  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:28:33.674478  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:28:33.675328  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:28:33.677339  283599 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:28:33.679453  283599 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:28:33.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:35.410708  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.182528  283599 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502979 seconds
	I0921 22:28:40.182719  283599 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:28:40.191775  283599 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:28:40.708308  283599 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:28:40.708506  283599 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:28:41.216221  283599 kubeadm.go:317] [bootstrap-token] Using token: 7zktge.i7kw817sdpmpqput
	I0921 22:28:41.217917  283599 out.go:204]   - Configuring RBAC rules ...
	I0921 22:28:41.218062  283599 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:28:41.221048  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:28:41.225663  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:28:41.227873  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0921 22:28:41.229840  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:28:41.231693  283599 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:28:41.238509  283599 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:28:41.448788  283599 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:28:41.684596  283599 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:28:41.686021  283599 kubeadm.go:317] 
	I0921 22:28:41.686112  283599 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:28:41.686121  283599 kubeadm.go:317] 
	I0921 22:28:41.686213  283599 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:28:41.686221  283599 kubeadm.go:317] 
	I0921 22:28:41.686253  283599 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:28:41.687200  283599 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:28:41.687275  283599 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:28:41.687282  283599 kubeadm.go:317] 
	I0921 22:28:41.687347  283599 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:28:41.687361  283599 kubeadm.go:317] 
	I0921 22:28:41.687420  283599 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:28:41.687443  283599 kubeadm.go:317] 
	I0921 22:28:41.687516  283599 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:28:41.687626  283599 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:28:41.687754  283599 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:28:41.687768  283599 kubeadm.go:317] 
	I0921 22:28:41.687856  283599 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:28:41.687945  283599 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:28:41.687952  283599 kubeadm.go:317] 
	I0921 22:28:41.688054  283599 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688176  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:28:41.688202  283599 kubeadm.go:317] 	--control-plane 
	I0921 22:28:41.688207  283599 kubeadm.go:317] 
	I0921 22:28:41.688304  283599 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:28:41.688309  283599 kubeadm.go:317] 
	I0921 22:28:41.688403  283599 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688525  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:28:41.691473  283599 kubeadm.go:317] W0921 22:28:32.894416    3309 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:28:41.691806  283599 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:28:41.691944  283599 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
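The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA for joining nodes: it is "sha256:" plus the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. It can be recomputed from the ca.crt under the certificateDir logged earlier (/var/lib/minikube/certs); the helper below is a sketch of that calculation, not kubeadm's own code.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns the kubeadm-style discovery hash for a PEM CA cert:
// sha256 over the certificate's raw SubjectPublicKeyInfo.
func caCertHash(caPEM []byte) (string, error) {
	block, _ := pem.Decode(caPEM)
	if block == nil {
		return "", fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(caCertHash(pemBytes))
}
```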
	I0921 22:28:41.691973  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:28:41.691983  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:28:41.694185  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
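The recommendation at cni.go:162 encodes a simple rule: with the docker driver, any runtime other than Docker needs an explicit CNI plugin, and minikube reaches for kindnet. Stripped to that one case (illustrative; the real chooser also handles multinode setups, user-supplied --cni values, and other drivers):

```go
package main

import "fmt"

// chooseCNI condenses the decision logged at cni.go:162: docker driver plus
// a non-docker runtime (containerd here) gets kindnet. All other branches of
// minikube's real CNI manager are omitted.
func chooseCNI(driver, containerRuntime string) string {
	if driver == "docker" && containerRuntime != "docker" {
		return "kindnet"
	}
	return "" // real minikube falls through to other CNI choices here
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}
```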
	I0921 22:28:37.910661  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.410644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:41.695783  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:28:41.699760  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:28:41.699784  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:28:41.776183  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:28:42.446104  283599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:28:42.446180  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.446216  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.524814  283599 ops.go:34] apiserver oom_adj: -16
	I0921 22:28:42.524918  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.099884  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.600017  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.099303  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.599933  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.100173  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.599961  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.099843  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.599840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.910093  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:44.910463  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:47.099465  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.599512  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.099998  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.599598  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.099840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.599433  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.099931  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.599355  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.099363  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.599865  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.410019  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:49.410428  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:51.410461  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:52.099400  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:52.600056  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.100255  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.599772  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.668975  283599 kubeadm.go:1067] duration metric: took 11.222848116s to wait for elevateKubeSystemPrivileges.
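The burst of `kubectl get sa default` calls above is a 500ms retry loop: the cluster-admin binding for kube-system can only be created once the API server has minted the default service account, so minikube polls until the get succeeds (11.2s in this run). The loop below shells out the same way the log does; the command path and flags are copied from the log, while the helper itself is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` every 500ms until the
// default service account exists or ctx expires.
func waitForDefaultSA(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo",
			"/var/lib/minikube/binaries/v1.25.2/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the service account exists; RBAC can be granted
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForDefaultSA(ctx))
}
```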
	I0921 22:28:53.669016  283599 kubeadm.go:398] StartCluster complete in 4m35.159165946s
	I0921 22:28:53.669039  283599 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:53.669157  283599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:28:53.670820  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:54.187769  283599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:28:54.187839  283599 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:28:54.187870  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:28:54.187894  283599 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:28:54.190631  283599 out.go:177] * Verifying Kubernetes components...
	I0921 22:28:54.187957  283599 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187964  283599 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187970  283599 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188002  283599 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188076  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:28:54.192035  283599 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192079  283599 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:28:54.192091  283599 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192114  283599 addons.go:162] addon dashboard should already be in state true
	I0921 22:28:54.192162  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192210  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:54.192299  283599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.192580  283599 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192616  283599 addons.go:162] addon metrics-server should already be in state true
	I0921 22:28:54.192633  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192666  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192163  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192666  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.193362  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.193439  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.234974  283599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:28:54.236667  283599 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.236692  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:28:54.236745  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.240000  283599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:28:54.239390  283599 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.241874  283599 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.244335  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:28:54.244363  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	W0921 22:28:54.241874  283599 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:28:54.244424  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.244454  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.244956  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.246658  283599 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.248082  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:28:54.248109  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:28:54.248165  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.272909  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.273873  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.277163  283599 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.277186  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:28:54.277236  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.290041  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.318706  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.398932  283599 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:28:54.399014  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:28:54.496523  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.498431  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.499591  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:28:54.499650  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:28:54.501640  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:28:54.501663  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:28:54.594519  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:28:54.594561  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:28:54.596768  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:28:54.596847  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:28:54.690036  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.690071  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:28:54.700119  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:28:54.700197  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:28:54.876320  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.883544  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:28:54.883571  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:28:54.977006  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:28:54.977040  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:28:55.079240  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:28:55.079273  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:28:55.176309  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:28:55.176344  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:28:55.276282  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:28:55.276317  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:28:55.379016  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.379044  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:28:55.386242  283599 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
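The injection confirmed here is the sed pipeline from a few lines up: it inserts a hosts{} stanza immediately before the `forward . /etc/resolv.conf` line of the CoreDNS Corefile so host.minikube.internal resolves to the host-side gateway (192.168.85.1), then replaces the ConfigMap. The same transformation in Go (an illustrative equivalent of the sed expression):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} block right before the
// "forward . /etc/resolv.conf" line, mirroring the sed in the log.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, hosts)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        forward . /etc/resolv.conf\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.85.1"))
}
```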
	I0921 22:28:55.399129  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.595061  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098450009s)
	I0921 22:28:55.786581  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288109437s)
	I0921 22:28:56.081753  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205376891s)
	I0921 22:28:56.081804  283599 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:56.387178  283599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0921 22:28:56.388690  283599 addons.go:414] enableAddons completed in 2.200797183s
	I0921 22:28:56.404853  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:53.909716  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:55.910611  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:58.405031  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:00.405582  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:58.409630  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:00.410447  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:02.905572  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:05.405473  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:02.910338  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:05.410066  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:07.904364  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:09.905589  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:07.910279  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:10.410127  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:12.405034  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:14.905741  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:12.910452  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:15.410553  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:17.404952  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:19.405175  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:21.405392  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:17.910479  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:20.410559  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:23.405620  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:25.905592  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:22.909898  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:24.910567  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:27.905775  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:30.405483  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:27.410039  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:29.410131  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:31.410291  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:32.904863  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:35.404709  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:33.910459  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:36.410445  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:37.905690  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:40.405229  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:38.910532  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:41.409671  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:42.905360  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:44.905907  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:43.410422  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:45.910511  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:47.404631  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:49.405402  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:48.409951  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:50.410363  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:51.904997  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:53.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:56.405228  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:52.411261  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:54.910318  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:58.405705  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:00.905348  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:57.409683  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:59.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.404779  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:05.404833  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:01.909994  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.910230  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:06.410036  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:07.405804  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:09.904912  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:08.909550  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:10.910475  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:13.409889  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:14.413229  276511 node_ready.go:38] duration metric: took 4m0.009606009s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:30:14.416209  276511 out.go:177] 
	W0921 22:30:14.417896  276511 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:30:14.417916  276511 out.go:239] * 
	W0921 22:30:14.418711  276511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:30:14.420798  276511 out.go:177] 
	I0921 22:30:11.905117  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:13.905422  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:15.906020  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:18.404644  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:20.404682  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:22.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:24.905233  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:27.404679  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:29.904692  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:31.905266  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:34.405088  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:36.405476  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:38.904414  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:40.905386  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:43.404507  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:45.405356  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:47.904571  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:50.405311  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:52.904564  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:54.905119  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:57.405076  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:59.405121  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:01.904816  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:03.905408  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:05.905565  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:08.404718  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:10.405173  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:12.905041  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:14.905498  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:17.405656  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:19.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:22.405514  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:24.904738  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:27.404689  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:29.405353  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:31.904926  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:34.405471  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:36.905606  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:39.404550  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:41.405513  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:43.905655  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:46.405308  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:48.405699  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:50.905270  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:53.405205  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:55.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:57.905798  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:00.405370  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:02.405480  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:04.904649  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:06.905338  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:09.404845  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:11.405472  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:13.905469  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:16.405211  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:18.405365  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:20.904698  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:23.405458  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:25.905299  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:27.905466  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:29.905633  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:32.404583  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:34.404795  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:36.405323  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:38.405395  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:40.904581  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:42.905533  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:45.405100  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:47.405337  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:49.405417  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:51.905042  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.404654  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.406831  283599 node_ready.go:38] duration metric: took 4m0.00786279s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:32:54.409456  283599 out.go:177] 
	W0921 22:32:54.411031  283599 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:32:54.411055  283599 out.go:239] * 
	W0921 22:32:54.411890  283599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:32:54.413449  283599 out.go:177] 
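Both GUEST_START failures above come from the same wait loop: node_ready.go polls the node's Ready condition roughly every 2.5s (matching the spacing of the entries) until the deadline, then reports the "took 4m0...s" duration metric and exits. A minimal client-go sketch of that poll-with-timeout pattern, as a reconstruction rather than minikube's actual code; the kubeconfig path is a placeholder:

    // nodeready_sketch.go: poll a node's Ready condition with a timeout,
    // mirroring the wait behind the node_ready.go log lines above.
    // A sketch, assuming a reachable kubeconfig; the path is a placeholder.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Re-check every 2.5s; give up after the 4m deadline seen in the duration metric.
    	err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-20220921220832-10174", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat transient API errors as "not ready yet" and keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		fmt.Println("timed out waiting for the condition:", err)
    	}
    }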
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	77848d3ed192b       d921cee849482       49 seconds ago      Running             kindnet-cni               4                   7b2148af52ea2
	753ae8b3726e2       d921cee849482       4 minutes ago       Exited              kindnet-cni               3                   7b2148af52ea2
	db2b32bf71cfd       1c7d8c51823b5       13 minutes ago      Running             kube-proxy                0                   56f73a44a0f43
	d9d6f00f601ad       97801f8394908       13 minutes ago      Running             kube-apiserver            2                   4de074ddb1303
	0e6e061bef128       ca0ea1ee3cfd3       13 minutes ago      Running             kube-scheduler            2                   9bf7c4d13f7cc
	e61defb21aca6       dbfceb93c69b6       13 minutes ago      Running             kube-controller-manager   2                   1aa2186d6444e
	cb8e747da8911       a8a176a5d5d69       13 minutes ago      Running             etcd                      2                   f0e82af2a9d13
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:17:29 UTC, end at Wed 2022-09-21 22:35:25 UTC. --
	Sep 21 22:27:45 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:27:45.375792287Z" level=info msg="RemoveContainer for \"9fa339f8a17988ae47ba53e5a834118b5286058169e096284e7c50ac173f6bb0\" returns successfully"
	Sep 21 22:27:57 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:27:57.683998336Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:27:57 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:27:57.697730853Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"9f5ca7fb88f544e4aac376ffc5a4363209a1124afe996a03bad6b9e351019fc7\""
	Sep 21 22:27:57 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:27:57.698303121Z" level=info msg="StartContainer for \"9f5ca7fb88f544e4aac376ffc5a4363209a1124afe996a03bad6b9e351019fc7\""
	Sep 21 22:27:57 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:27:57.781261168Z" level=info msg="StartContainer for \"9f5ca7fb88f544e4aac376ffc5a4363209a1124afe996a03bad6b9e351019fc7\" returns successfully"
	Sep 21 22:30:38 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:30:38.222429824Z" level=info msg="shim disconnected" id=9f5ca7fb88f544e4aac376ffc5a4363209a1124afe996a03bad6b9e351019fc7
	Sep 21 22:30:38 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:30:38.222491485Z" level=warning msg="cleaning up after shim disconnected" id=9f5ca7fb88f544e4aac376ffc5a4363209a1124afe996a03bad6b9e351019fc7 namespace=k8s.io
	Sep 21 22:30:38 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:30:38.222508184Z" level=info msg="cleaning up dead shim"
	Sep 21 22:30:38 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:30:38.232990757Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:30:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5237 runtime=io.containerd.runc.v2\n"
	Sep 21 22:30:38 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:30:38.686590089Z" level=info msg="RemoveContainer for \"6b599acb1664c2790e259fbd46aeea9d1c71d8d2658a062f2db94e88a20513ae\""
	Sep 21 22:30:38 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:30:38.691795477Z" level=info msg="RemoveContainer for \"6b599acb1664c2790e259fbd46aeea9d1c71d8d2658a062f2db94e88a20513ae\" returns successfully"
	Sep 21 22:31:06 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:31:06.680993746Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:31:06 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:31:06.695056737Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49\""
	Sep 21 22:31:06 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:31:06.695921310Z" level=info msg="StartContainer for \"753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49\""
	Sep 21 22:31:06 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:31:06.880912897Z" level=info msg="StartContainer for \"753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49\" returns successfully"
	Sep 21 22:33:47 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:33:47.324960574Z" level=info msg="shim disconnected" id=753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49
	Sep 21 22:33:47 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:33:47.325015649Z" level=warning msg="cleaning up after shim disconnected" id=753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49 namespace=k8s.io
	Sep 21 22:33:47 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:33:47.325025822Z" level=info msg="cleaning up dead shim"
	Sep 21 22:33:47 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:33:47.335369793Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:33:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5355 runtime=io.containerd.runc.v2\n"
	Sep 21 22:33:48 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:33:48.046545445Z" level=info msg="RemoveContainer for \"9f5ca7fb88f544e4aac376ffc5a4363209a1124afe996a03bad6b9e351019fc7\""
	Sep 21 22:33:48 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:33:48.052075571Z" level=info msg="RemoveContainer for \"9f5ca7fb88f544e4aac376ffc5a4363209a1124afe996a03bad6b9e351019fc7\" returns successfully"
	Sep 21 22:34:35 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:34:35.681058089Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Sep 21 22:34:35 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:34:35.694524911Z" level=info msg="CreateContainer within sandbox \"7b2148af52ea2517b63fc5e58407ab436d5b350d2305fb41d4aedbe50d2cf11e\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"77848d3ed192b7bc63914c0aed49cbe417a5b9df515d437d3cf6cb870b13e94b\""
	Sep 21 22:34:35 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:34:35.695093530Z" level=info msg="StartContainer for \"77848d3ed192b7bc63914c0aed49cbe417a5b9df515d437d3cf6cb870b13e94b\""
	Sep 21 22:34:35 embed-certs-20220921220439-10174 containerd[386]: time="2022-09-21T22:34:35.793863656Z" level=info msg="StartContainer for \"77848d3ed192b7bc63914c0aed49cbe417a5b9df515d437d3cf6cb870b13e94b\" returns successfully"
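The containerd log above shows the CRI plugin recreating kindnet-cni inside the same sandbox as the kubelet retries it (Attempt 2, then 3, then 4). The same container list can be read directly from containerd's k8s.io namespace; a sketch assuming it runs on the node with access to the containerd socket:

    // list_containers.go: enumerate containers in containerd's k8s.io
    // namespace, the namespace the CRI plugin uses for the containers above.
    // A sketch; assumes access to /run/containerd/containerd.sock on the node.
    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// CRI-managed containers live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range containers {
    		info, err := c.Info(ctx)
    		if err != nil {
    			continue
    		}
    		fmt.Printf("%s  image=%s\n", c.ID(), info.Image)
    	}
    }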
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220921220439-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220921220439-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=embed-certs-20220921220439-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_22_09_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:22:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220921220439-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:35:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:32:31 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:32:31 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:32:31 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:32:31 +0000   Wed, 21 Sep 2022 22:22:02 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220921220439-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                39299add-007b-4517-8e1f-4d420ff2375f
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220921220439-10174                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-ttwgn                                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-embed-certs-20220921220439-10174              250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-20220921220439-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-rmkm2                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-20220921220439-10174              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x4 over 13m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x4 over 13m)  kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-20220921220439-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-20220921220439-10174 event: Registered Node embed-certs-20220921220439-10174 in Controller
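The decisive line in the node description is the Ready=False condition with reason KubeletNotReady and the "cni plugin not initialized" message, which also explains the node.kubernetes.io/not-ready:NoSchedule taint. A short client-go sketch that surfaces a node's conditions and taints the same way; the kubeconfig path is a placeholder:

    // node_conditions.go: print every condition on a node with its reason and
    // message, which is how the "cni plugin not initialized" cause above
    // surfaces. A sketch; the kubeconfig path is a placeholder.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	node, err := client.CoreV1().Nodes().Get(context.TODO(), "embed-certs-20220921220439-10174", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Conditions carry the Reason/Message pairs shown in the table above.
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
    	}
    	// The not-ready taint appears on the spec, not the status.
    	for _, t := range node.Spec.Taints {
    		fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
    	}
    }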
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [cb8e747da8911d7b0690bd1e54febfd721e32f467db89180c1209c4921e49ee5] <==
	* {"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-09-21T22:22:02.391Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:22:02.979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:22:02.979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-09-21T22:22:02.980Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220921220439-10174 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:22:02.981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:22:02.982Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:22:02.982Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-09-21T22:32:03.318Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":540}
	{"level":"info","ts":"2022-09-21T22:32:03.319Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":540,"took":"490.308µs"}
	
	* 
	* ==> kernel <==
	*  22:35:25 up  1:17,  0 users,  load average: 0.24, 0.36, 0.98
	Linux embed-certs-20220921220439-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [d9d6f00f601ad90b6215ac35efe6ec71385b625769daced48a17a5f76c90cc37] <==
	* W0921 22:30:06.711971       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:30:06.712056       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:30:06.712413       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:32:06.714126       1 handler_proxy.go:105] no RequestInfo found in the context
	W0921 22:32:06.714142       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:32:06.714161       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:32:06.714168       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0921 22:32:06.714202       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:32:06.715323       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:33:06.715238       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:33:06.715277       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:33:06.715286       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:33:06.716402       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:33:06.716455       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:33:06.716463       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:35:06.715973       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:35:06.716015       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:35:06.716023       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:35:06.717142       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:35:06.717213       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:35:06.717232       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
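The apiserver's repeating 503s come from the aggregation layer: the v1beta1.metrics.k8s.io APIService points at a metrics-server backend it cannot reach, so OpenAPI aggregation for that group is requeued every cycle. Its Available condition can be queried with the kube-aggregator clientset; a sketch, with the import path and kubeconfig path as assumptions here:

    // apiservice_check.go: query the availability of the aggregated
    // v1beta1.metrics.k8s.io APIService that the apiserver log above keeps
    // failing to reach with a 503. A sketch; paths are assumptions.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/tools/clientcmd"
    	aggregator "k8s.io/kube-aggregator/pkg/client/clientset/versioned"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client := aggregator.NewForConfigOrDie(cfg)

    	svc, err := client.ApiregistrationV1().APIServices().Get(context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range svc.Status.Conditions {
    		// Available=False here matches the 503s in the apiserver log above.
    		fmt.Printf("%s=%s reason=%s message=%s\n", c.Type, c.Status, c.Reason, c.Message)
    	}
    }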
	
	* 
	* ==> kube-controller-manager [e61defb21aca6380b947a10f6c1b57bbbad3be0a94605532918f46b10500a1e1] <==
	* W0921 22:29:21.829990       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:29:51.301864       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:29:51.839487       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:30:21.307604       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:30:21.849698       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:30:51.313140       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:30:51.859864       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:31:21.319969       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:31:21.872732       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:31:51.326263       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:31:51.883970       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:32:21.332379       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:32:21.895509       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:32:51.337772       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:32:51.910395       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:33:21.344414       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:33:21.920742       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:33:51.350473       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:33:51.936025       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:34:21.356281       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:34:21.948821       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:34:51.363050       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:34:51.961654       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:35:21.369696       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:35:21.972299       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [db2b32bf71cfdfa827a9c9802b4d48659cbe2dbab4ee43a889088c3da006fd52] <==
	* I0921 22:22:22.120381       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0921 22:22:22.120449       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0921 22:22:22.120475       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:22:22.139843       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:22:22.139879       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:22:22.139897       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:22:22.139916       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:22:22.139949       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:22:22.140085       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:22:22.140287       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:22:22.140312       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:22:22.140854       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:22:22.140865       1 config.go:317] "Starting service config controller"
	I0921 22:22:22.140875       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:22:22.140883       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:22:22.140923       1 config.go:444] "Starting node config controller"
	I0921 22:22:22.141091       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:22:22.241043       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:22:22.241060       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0921 22:22:22.241179       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0e6e061bef128b505ba44db28f6d8a49a4912fe2cd4fe925288aa43db0ff17fe] <==
	* W0921 22:22:05.798787       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:22:05.798970       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:22:05.799229       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:22:05.799254       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:22:05.799327       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:22:05.799345       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:22:05.799401       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:22:05.799421       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:22:05.799526       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:22:05.799545       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:22:05.799595       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:22:05.799617       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0921 22:22:06.630521       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:22:06.630561       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:22:06.631392       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:22:06.631423       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:22:06.746394       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0921 22:22:06.746460       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:22:06.788783       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:22:06.788828       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:22:06.796264       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:22:06.796302       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:22:06.808414       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0921 22:22:06.808454       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0921 22:22:07.395625       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:17:29 UTC, end at Wed 2022-09-21 22:35:25 UTC. --
	Sep 21 22:33:58 embed-certs-20220921220439-10174 kubelet[3842]: I0921 22:33:58.679112    3842 scope.go:115] "RemoveContainer" containerID="753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49"
	Sep 21 22:33:58 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:33:58.679532    3842 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ttwgn_kube-system(64a9192e-6081-4b66-8bc3-28f897591f26)\"" pod="kube-system/kindnet-ttwgn" podUID=64a9192e-6081-4b66-8bc3-28f897591f26
	Sep 21 22:33:59 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:33:59.052343    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:04 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:04.053996    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:09 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:09.055485    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:09 embed-certs-20220921220439-10174 kubelet[3842]: I0921 22:34:09.678918    3842 scope.go:115] "RemoveContainer" containerID="753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49"
	Sep 21 22:34:09 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:09.679246    3842 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ttwgn_kube-system(64a9192e-6081-4b66-8bc3-28f897591f26)\"" pod="kube-system/kindnet-ttwgn" podUID=64a9192e-6081-4b66-8bc3-28f897591f26
	Sep 21 22:34:14 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:14.056582    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:19 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:19.060413    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:23 embed-certs-20220921220439-10174 kubelet[3842]: I0921 22:34:23.678795    3842 scope.go:115] "RemoveContainer" containerID="753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49"
	Sep 21 22:34:23 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:23.679109    3842 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ttwgn_kube-system(64a9192e-6081-4b66-8bc3-28f897591f26)\"" pod="kube-system/kindnet-ttwgn" podUID=64a9192e-6081-4b66-8bc3-28f897591f26
	Sep 21 22:34:24 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:24.061419    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:29 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:29.063116    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:34 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:34.064214    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:35 embed-certs-20220921220439-10174 kubelet[3842]: I0921 22:34:35.678334    3842 scope.go:115] "RemoveContainer" containerID="753ae8b3726e2e22e571f200280403bf5de03be568717cdaee362de09226ca49"
	Sep 21 22:34:39 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:39.065078    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:44 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:44.066002    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:49 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:49.067345    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:54 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:54.068372    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:34:59 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:34:59.070027    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:35:04 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:35:04.071231    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:35:09 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:35:09.072887    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:35:14 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:35:14.073758    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:35:19 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:35:19.074742    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:35:24 embed-certs-20220921220439-10174 kubelet[3842]: E0921 22:35:24.076455    3842 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj: exit status 1 (62.550664ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-9lkvq" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-mplqh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-xnlgm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-nbnhj" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220921220439-10174 describe pod coredns-565d847f94-9lkvq metrics-server-5c8fd5cf8-mplqh storage-provisioner dashboard-metrics-scraper-7b94984548-xnlgm kubernetes-dashboard-54596f475f-nbnhj: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.32s)
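The kubelet log above shows the failure chain for this test: kindnet-cni is stuck in CrashLoopBackOff, so the CNI never initializes ("Network plugin returns error: cni plugin not initialized"), the node keeps its node.kubernetes.io/not-ready taint, and the dashboard pods stay Pending. A minimal triage sketch against this profile, using only the context, pod, and node names that appear in the logs above (plain kubectl invocations, not part of the test harness):

	kubectl --context embed-certs-20220921220439-10174 -n kube-system logs kindnet-ttwgn --previous
	kubectl --context embed-certs-20220921220439-10174 -n kube-system describe pod kindnet-ttwgn
	kubectl --context embed-certs-20220921220439-10174 describe node embed-certs-20220921220439-10174

The --previous flag retrieves the log of the last crashed kindnet-cni container, which is usually where the restart loop's root cause surfaces; describe shows the restart count, events, and the node's taints.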

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-gh8ld" [a055b009-ae2e-416a-b1a9-dc2c57ed6741] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0921 22:31:02.147349   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:31:21.555261   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 22:31:21.949923   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/enable-default-cni-20220921215523-10174/client.crt: no such file or directory
E0921 22:31:38.505115   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 22:31:59.250146   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... the preceding helpers_test.go:327 warning repeats 32 more times ...]
E0921 22:39:05.192921   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[... the preceding helpers_test.go:327 warning repeats 12 more times ...]
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-09-21 22:39:16.83029186 +0000 UTC m=+4324.968743867
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe po kubernetes-dashboard-54596f475f-gh8ld -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context no-preload-20220921220832-10174 describe po kubernetes-dashboard-54596f475f-gh8ld -n kubernetes-dashboard: context deadline exceeded (1.548µs)
start_stop_delete_test.go:274: kubectl --context no-preload-20220921220832-10174 describe po kubernetes-dashboard-54596f475f-gh8ld -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 logs kubernetes-dashboard-54596f475f-gh8ld -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context no-preload-20220921220832-10174 logs kubernetes-dashboard-54596f475f-gh8ld -n kubernetes-dashboard: context deadline exceeded (160ns)
start_stop_delete_test.go:274: kubectl --context no-preload-20220921220832-10174 logs kubernetes-dashboard-54596f475f-gh8ld -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
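As in the embed-certs case above, the scheduling event at the top of this test shows kubernetes-dashboard-54596f475f-gh8ld Pending behind the node's untolerated node.kubernetes.io/not-ready taint until the 9m0s budget runs out, while the harness polls the pod list (the helpers_test.go:327 warnings). A rough stand-in for that wait, assuming the profile were still running (standard kubectl, not the harness code itself):

	kubectl --context no-preload-20220921220832-10174 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m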
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220921220832-10174
helpers_test.go:235: (dbg) docker inspect no-preload-20220921220832-10174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e",
	        "Created": "2022-09-21T22:08:33.259074855Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 276819,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:21:22.389970999Z",
	            "FinishedAt": "2022-09-21T22:21:20.752642361Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/hosts",
	        "LogPath": "/var/lib/docker/containers/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e/d6359e799a3f8bf5c7871e4f7257a82e60fa936a5d415f8dcf227027153b841e-json.log",
	        "Name": "/no-preload-20220921220832-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-20220921220832-10174:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220921220832-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d97efec5bb72252d948800969bf3c9c07c6335302b779ac376f906a108bc7bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220921220832-10174",
	                "Source": "/var/lib/docker/volumes/no-preload-20220921220832-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220921220832-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "name.minikube.sigs.k8s.io": "no-preload-20220921220832-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80be6817ec09ec1e98145a8a646af11f4f74d4ba59d85211dcfab6cba5a3401d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/80be6817ec09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220921220832-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d6359e799a3f",
	                        "no-preload-20220921220832-10174"
	                    ],
	                    "NetworkID": "40cb175bb75cdb2ff8ee942229fbc7e22e0ed7651da5bae77cd3dd1e2f70c5e3",
	                    "EndpointID": "e7b2dfb5c43b9948e24c210d676d20bdba88c008cdb5f205fd56c5ca5e54225a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20220921220832-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:23 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:35 UTC | 21 Sep 22 22:35 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:24:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:24:01.692796  283599 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:24:01.693211  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693232  283599 out.go:309] Setting ErrFile to fd 2...
	I0921 22:24:01.693240  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693504  283599 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:24:01.694665  283599 out.go:303] Setting JSON to false
	I0921 22:24:01.696140  283599 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3993,"bootTime":1663795049,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:24:01.696247  283599 start.go:125] virtualization: kvm guest
	I0921 22:24:01.698874  283599 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:24:01.701214  283599 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:24:01.701128  283599 notify.go:214] Checking for updates...
	I0921 22:24:01.703092  283599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:24:01.704791  283599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:01.706544  283599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:24:01.708172  283599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:23:57.318050  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:59.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:01.710349  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:01.710930  283599 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:24:01.744026  283599 docker.go:137] docker version: linux-20.10.18
	I0921 22:24:01.744136  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.840732  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.764457724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.840851  283599 docker.go:254] overlay module found
	I0921 22:24:01.843051  283599 out.go:177] * Using the docker driver based on existing profile
	I0921 22:24:01.844347  283599 start.go:284] selected driver: docker
	I0921 22:24:01.844371  283599 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.844475  283599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:24:01.845300  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.940944  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.86716064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.941199  283599 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:24:01.941223  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:01.941231  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:01.941249  283599 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.944240  283599 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:24:01.945596  283599 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:24:01.946905  283599 out.go:177] * Pulling base image ...
	I0921 22:24:01.948255  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:01.948306  283599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:24:01.948321  283599 cache.go:57] Caching tarball of preloaded images
	I0921 22:24:01.948361  283599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:24:01.948572  283599 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:24:01.948588  283599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:24:01.948702  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:01.976413  283599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:24:01.976445  283599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:24:01.976457  283599 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:24:01.976502  283599 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:24:01.976622  283599 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 78.111µs
	I0921 22:24:01.976652  283599 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:24:01.976660  283599 fix.go:55] fixHost starting: 
	I0921 22:24:01.976899  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.002084  283599 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221118-10174: state=Stopped err=<nil>
	W0921 22:24:02.002122  283599 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:24:02.004632  283599 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	I0921 22:24:00.289698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.790230  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.006307  283599 cli_runner.go:164] Run: docker start default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.358108  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.385298  283599 kic.go:415] container "default-k8s-different-port-20220921221118-10174" state is running.
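	The three cli_runner calls above are plain Docker CLI invocations: start the stopped container, then re-inspect it until it reports "running". A minimal sketch of that restart-and-poll step (illustrative only, not minikube's implementation; the container name is taken from the log):

    // Restart a container and poll its state via the Docker CLI,
    // mirroring the cli_runner calls in the log above. A sketch.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const name = "default-k8s-different-port-20220921221118-10174"
    	if err := exec.Command("docker", "start", name).Run(); err != nil {
    		panic(err)
    	}
    	// Poll until the container reports "running", or give up after ~30s.
    	for i := 0; i < 30; i++ {
    		if st, err := containerState(name); err == nil && st == "running" {
    			fmt.Println("container is", st)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("container did not reach running state")
    }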
	I0921 22:24:02.385684  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.412757  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:02.412997  283599 machine.go:88] provisioning docker machine ...
	I0921 22:24:02.413031  283599 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:24:02.413108  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.438229  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:02.438400  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:02.438416  283599 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:24:02.439038  283599 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34230->127.0.0.1:49443: read: connection reset by peer
	I0921 22:24:05.584682  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:24:05.584766  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.608825  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:05.609026  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:05.609059  283599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:24:05.739656  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
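	The handshake failure at 22:24:02 ("connection reset by peer") is transient: sshd inside the freshly restarted container is not up yet, and the same command succeeds by 22:24:05. A minimal sketch, assuming simple fixed-interval retries, of dialing the forwarded SSH port until it accepts connections:

    // Retry a TCP dial to the forwarded SSH port; a freshly restarted
    // container may reset connections until sshd is listening. A sketch
    // of assumed logic, not minikube's code.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		time.Sleep(time.Second) // back off before the next attempt
    	}
    	return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
    }

    func main() {
    	conn, err := dialWithRetry("127.0.0.1:49443", 10) // port from the log
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	fmt.Println("connected to", conn.RemoteAddr())
    }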
	I0921 22:24:05.739694  283599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:24:05.739749  283599 ubuntu.go:177] setting up certificates
	I0921 22:24:05.739765  283599 provision.go:83] configureAuth start
	I0921 22:24:05.739824  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.764789  283599 provision.go:138] copyHostCerts
	I0921 22:24:05.764839  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:24:05.764846  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:24:05.764904  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:24:05.764993  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:24:05.765005  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:24:05.765028  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:24:05.765086  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:24:05.765095  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:24:05.765118  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:24:05.765169  283599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:24:05.914466  283599 provision.go:172] copyRemoteCerts
	I0921 22:24:05.914534  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:24:05.914564  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.939805  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.031315  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:24:06.048618  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:24:06.065530  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:24:06.083800  283599 provision.go:86] duration metric: configureAuth took 344.021748ms
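	The "duration metric" lines are straightforward wall-clock measurements. A minimal illustrative sketch of how such a line can be produced with the standard library (configureAuth here is a stand-in, not the real function):

    // Time a provisioning step and log a duration metric for it. A sketch.
    package main

    import (
    	"log"
    	"time"
    )

    func configureAuth() { time.Sleep(350 * time.Millisecond) } // stand-in

    func main() {
    	start := time.Now()
    	configureAuth()
    	log.Printf("duration metric: configureAuth took %s", time.Since(start))
    }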
	I0921 22:24:06.083828  283599 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:24:06.083988  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:06.083999  283599 machine.go:91] provisioned docker machine in 3.670987023s
	I0921 22:24:06.084006  283599 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:24:06.084012  283599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:24:06.084049  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:24:06.084088  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.108286  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.203139  283599 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:24:06.205811  283599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:24:06.205839  283599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:24:06.205852  283599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:24:06.205864  283599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:24:06.205880  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:24:06.205944  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:24:06.206037  283599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:24:06.206142  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:24:06.212569  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:06.229418  283599 start.go:303] post-start completed in 145.398445ms
	I0921 22:24:06.229483  283599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:24:06.229517  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.253305  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.340119  283599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:24:06.344050  283599 fix.go:57] fixHost completed within 4.367385464s
	I0921 22:24:06.344071  283599 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 4.367430848s
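	The two df pipelines above check how full /var is before continuing. The same check, parsed in Go rather than awk (a sketch; assumes GNU df is available):

    // Report free space on /var, mirroring the shell pipeline in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sh", "-c", "df -BG /var | awk 'NR==2{print $4}'").Output()
    	if err != nil {
    		panic(err)
    	}
    	free := strings.TrimSpace(string(out)) // e.g. "263G"
    	fmt.Println("free space on /var:", free)
    }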
	I0921 22:24:06.344157  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368445  283599 ssh_runner.go:195] Run: systemctl --version
	I0921 22:24:06.368501  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368505  283599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:24:06.368550  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.394444  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.396066  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.513229  283599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:24:06.524587  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:24:06.533746  283599 docker.go:188] disabling docker service ...
	I0921 22:24:06.533795  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:24:06.543075  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:24:06.551813  283599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:24:06.629483  283599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:24:01.818416  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:04.317912  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:05.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:07.290168  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:06.707030  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:24:06.717244  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:24:06.729638  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.737194  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.744928  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.752650  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.760419  283599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:24:06.766584  283599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:24:06.772903  283599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:24:06.844578  283599 ssh_runner.go:195] Run: sudo systemctl restart containerd
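	Each of the sed commands above rewrites a single key in /etc/containerd/config.toml before the daemon-reload and restart. A sketch of the same single-key rewrite in Go (shown for sandbox_image only; the other keys follow the same pattern):

    // Rewrite one key in containerd's config, like the sed one-liners above.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|'
    	re := regexp.MustCompile(`(?m)^.*sandbox_image = .*$`)
    	data = re.ReplaceAll(data, []byte(`sandbox_image = "registry.k8s.io/pause:3.8"`))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }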
	I0921 22:24:06.917291  283599 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:24:06.917353  283599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:24:06.921118  283599 start.go:471] Will wait 60s for crictl version
	I0921 22:24:06.921184  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:06.948257  283599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:24:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0921 22:24:06.817672  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.317278  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:11.317829  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.789185  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:12.289080  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:13.817410  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:15.817496  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:17.995620  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:18.018705  283599 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:24:18.018768  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.047337  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.078051  283599 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:24:14.289667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:16.789199  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:18.079491  283599 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:24:18.103308  283599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:24:18.106553  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
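	The pipeline above makes the host.minikube.internal mapping idempotent: strip any stale entry, then append the current one. The same edit as a Go sketch (run as root; the entry value is from the log):

    // Idempotently refresh the host.minikube.internal line in /etc/hosts.
    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.85.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
    		panic(err)
    	}
    }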
	I0921 22:24:18.115993  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:18.116056  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.139896  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.139921  283599 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:24:18.139964  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.163323  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.163344  283599 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:24:18.163382  283599 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:24:18.186911  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:18.186935  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:18.186948  283599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:24:18.186961  283599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:24:18.187074  283599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0921 22:24:18.187152  283599 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
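	Manifests like the kubeadm config and the kubelet unit above are rendered from templates filled in from the cluster config. A tiny self-contained sketch of that approach (the template and field names here are illustrative, not minikube's own):

    // Render a kubeadm manifest fragment from a Go template. A sketch.
    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Values taken from the cluster config shown above.
    	err := t.Execute(os.Stdout, struct {
    		NodeIP        string
    		APIServerPort int
    	}{"192.168.85.2", 8444})
    	if err != nil {
    		panic(err)
    	}
    }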
	I0921 22:24:18.187196  283599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:24:18.194012  283599 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:24:18.194081  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:24:18.200606  283599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:24:18.212899  283599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:24:18.224754  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:24:18.236775  283599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:24:18.239439  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.248263  283599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:24:18.248377  283599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:24:18.248421  283599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:24:18.248485  283599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:24:18.248538  283599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:24:18.248575  283599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:24:18.248658  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:24:18.248689  283599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:24:18.248705  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:24:18.248729  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:24:18.248758  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:24:18.248780  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:24:18.248846  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:18.249439  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:24:18.265894  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:24:18.282128  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:24:18.298690  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:24:18.315323  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:24:18.331842  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:24:18.348196  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:24:18.364368  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:24:18.380401  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:24:18.396696  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:24:18.413238  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:24:18.429482  283599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:24:18.441654  283599 ssh_runner.go:195] Run: openssl version
	I0921 22:24:18.446184  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:24:18.453215  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456119  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456166  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.460690  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:24:18.467196  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:24:18.474449  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477401  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477445  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.481956  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:24:18.488418  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:24:18.495604  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498556  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498600  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.503245  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
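The three symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: each CA certificate is also linked into /etc/ssl/certs under <subject-hash>.0 so the OpenSSL lookup path can find it. A minimal sketch of the same step done by hand (cert path taken from the log):

    # Compute the subject hash OpenSSL uses to index CA certificates,
    # then link the cert as /etc/ssl/certs/<hash>.0.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"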
	I0921 22:24:18.509856  283599 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:18.509953  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:24:18.509985  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:18.533346  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:18.533375  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:18.533382  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:18.533388  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:18.533393  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:18.533402  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:18.533407  283599 cri.go:87] found id: ""
	I0921 22:24:18.533444  283599 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:24:18.545553  283599 cri.go:114] JSON = null
	W0921 22:24:18.545605  283599 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
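The warning above is a disagreement between two views of the runtime: crictl reports six kube-system containers, while runc sees nothing under the k8s.io root, so there is nothing to unpause. The same comparison, condensed from the two commands in the log:

    # CRI view: kube-system container IDs known to containerd.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # runc view of the same root; the log got back "null" here,
    # which is what triggers the "unpause failed" warning.
    sudo runc --root /run/containerd/runc/k8s.io list -f json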
	I0921 22:24:18.545686  283599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:24:18.552635  283599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:24:18.552664  283599 kubeadm.go:627] restartCluster start
	I0921 22:24:18.552705  283599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:24:18.558944  283599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.559817  283599 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220921221118-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:18.560296  283599 kubeconfig.go:127] "default-k8s-different-port-20220921221118-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:24:18.561146  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
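kubeconfig.go decides the context is missing by reading the kubeconfig file directly; the same condition can be checked by hand with kubectl (profile name as in this run):

    # Exit non-zero if the profile's context is absent from $KUBECONFIG,
    # which is the state that makes minikube rewrite the file.
    kubectl config get-contexts -o name | \
      grep -qx default-k8s-different-port-20220921221118-10174 || \
      echo "context missing - minikube will repair the kubeconfig"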
	I0921 22:24:18.562655  283599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:24:18.568841  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.568884  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.576584  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.776932  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.777023  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.786228  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.977461  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.977542  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.986186  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.177398  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.177487  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.186159  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.377453  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.377534  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.385921  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.577206  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.577296  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.586370  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.777572  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.777676  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.786797  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.977103  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.977188  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.985822  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.177132  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.177234  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.185876  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.377187  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.377298  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.386086  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.577399  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.577488  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.586142  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.777447  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.777527  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.786547  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.976769  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.976865  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.985682  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.176870  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.176951  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.185811  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.377116  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.377184  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.385829  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.577109  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.577202  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.585911  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.585933  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.585979  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.593866  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.593893  283599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
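Every "Checking apiserver status" iteration above is the same process probe, retried on a short interval until the overall wait gives up. A condensed sketch of that loop (retry count and sleep are illustrative; the pgrep invocation is the one from the log):

    # pgrep -x: exact match, -n: newest, -f: match the full command line.
    # Exit status 1 means no kube-apiserver process yet, as in every
    # iteration above.
    for i in $(seq 1 15); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.2
    done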
	I0921 22:24:21.593899  283599 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:24:21.593908  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:24:21.593964  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:21.618017  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:21.618041  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:21.618048  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:21.618058  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:21.618064  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:21.618072  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:21.618078  283599 cri.go:87] found id: ""
	I0921 22:24:21.618082  283599 cri.go:232] Stopping containers: [1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767]
	I0921 22:24:21.618118  283599 ssh_runner.go:195] Run: which crictl
	I0921 22:24:21.621347  283599 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767
	I0921 22:24:21.645531  283599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
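The restart path first quiesces the old control plane: all kube-system containers are stopped through crictl, then the kubelet unit is stopped so the static pods are not immediately recreated. The same two steps, condensed:

    # Collect the kube-system container IDs (the six listed above),
    # stop them, then stop the kubelet.
    IDS=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    sudo crictl stop $IDS
    sudo systemctl stop kubelet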
	I0921 22:24:21.655622  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:24:21.662408  283599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 21 22:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep 21 22:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:11 /etc/kubernetes/scheduler.conf
	
	I0921 22:24:21.662459  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0921 22:24:21.669029  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0921 22:24:21.675699  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.682316  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.682358  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.688501  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0921 22:24:17.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:19.818111  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:18.789856  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.289803  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.694928  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.696684  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
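The grep checks above verify that each component kubeconfig on the node still references the expected control-plane endpoint; any file that does not mention it is deleted so kubeadm can regenerate it in the next step. The per-file check, condensed:

    # Remove component kubeconfigs that no longer point at the
    # expected endpoint; kubeadm recreates them below.
    EP='https://control-plane.minikube.internal:8444'
    for f in /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
      sudo grep -q "$EP" "$f" || sudo rm -f "$f"
    done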
	I0921 22:24:21.703329  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710109  283599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710132  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:21.757457  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.810948  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053458682s)
	I0921 22:24:22.810976  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.943243  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.995873  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
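Rather than a full kubeadm init, the restart replays only the phases it needs, in order: certs, kubeconfigs, kubelet startup, control-plane static pod manifests, and local etcd. Stripped to the commands:

    # Replay the individual init phases against the regenerated config.
    K=/var/lib/minikube/binaries/v1.25.2
    C=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K:$PATH" kubeadm init phase certs all --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config "$C"
    sudo env PATH="$K:$PATH" kubeadm init phase etcd local --config "$C"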
	I0921 22:24:23.097694  283599 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:24:23.097766  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:23.608210  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.107567  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.187217  283599 api_server.go:71] duration metric: took 1.089523123s to wait for apiserver process to appear ...
	I0921 22:24:24.187296  283599 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:24:24.187323  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:24.187688  283599 api_server.go:256] stopped: https://192.168.85.2:8444/healthz: Get "https://192.168.85.2:8444/healthz": dial tcp 192.168.85.2:8444: connect: connection refused
	I0921 22:24:24.688449  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:22.317667  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:24.317872  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:23.789425  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:25.789684  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.790412  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.592182  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:24:27.592315  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
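The 403 above is normal this early in startup: the health probe hits the apiserver anonymously, and until the RBAC bootstrap roles are created (note the failing poststarthook/rbac/bootstrap-roles below) system:anonymous is not allowed to read /healthz. The probe itself is a plain HTTPS GET; an equivalent ad-hoc check (-k skips TLS verification):

    # 403 (anonymous) -> 500 (poststarthooks still failing) -> 200 "ok",
    # matching the progression in the log.
    curl -ks https://192.168.85.2:8444/healthz
    curl -ks 'https://192.168.85.2:8444/healthz?verbose'   # per-check [+]/[-] list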
	I0921 22:24:27.688579  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.694601  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:27.694667  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.187832  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.192979  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.193004  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.688623  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.695172  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.695285  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:29.187841  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:29.193157  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0921 22:24:29.198775  283599 api_server.go:140] control plane version: v1.25.2
	I0921 22:24:29.198796  283599 api_server.go:130] duration metric: took 5.011488882s to wait for apiserver health ...
	I0921 22:24:29.198805  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:29.198812  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:29.201314  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:24:29.202798  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:24:29.206616  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:24:29.206636  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:24:29.221913  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
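With the apiserver healthy, the generated kindnet manifest is applied using the kubeconfig written earlier in this run; the command from the log, reformatted:

    # Apply the CNI manifest (kindnet for the docker driver +
    # containerd runtime, per the recommendation above).
    sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -f /var/tmp/minikube/cni.yaml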
	I0921 22:24:29.826767  283599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:24:29.834488  283599 system_pods.go:59] 9 kube-system pods found
	I0921 22:24:29.834517  283599 system_pods.go:61] "coredns-565d847f94-mrkjn" [7f364c47-74ce-4271-aab1-67bba320c586] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834528  283599 system_pods.go:61] "etcd-default-k8s-different-port-20220921221118-10174" [8f0f58a7-7eae-43db-840f-bde95464e94e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:24:29.834533  283599 system_pods.go:61] "kindnet-7wbpp" [3f16ae0b-2f66-4f1e-b234-74570472a7f8] Running
	I0921 22:24:29.834539  283599 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220921221118-10174" [3a935d6b-ca77-4bcb-ae19-0a2af77c12a1] Running
	I0921 22:24:29.834544  283599 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220921221118-10174" [d01ee91a-5587-48e9-a235-68a73d5fedef] Running
	I0921 22:24:29.834549  283599 system_pods.go:61] "kube-proxy-lzphc" [611dbd37-0771-41b2-b886-93f46d79f802] Running
	I0921 22:24:29.834554  283599 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220921221118-10174" [998713da-f133-43f7-9f11-c6110ad66c8d] Running
	I0921 22:24:29.834561  283599 system_pods.go:61] "metrics-server-5c8fd5cf8-sshzh" [5972fae5-09c2-4e2e-b609-ef85f72311e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834572  283599 system_pods.go:61] "storage-provisioner" [ca16dea1-fb3d-4cc1-b449-2236aefcc627] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834577  283599 system_pods.go:74] duration metric: took 7.786123ms to wait for pod list to return data ...
	I0921 22:24:29.834588  283599 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:24:29.837059  283599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:24:29.837085  283599 node_conditions.go:123] node cpu capacity is 8
	I0921 22:24:29.837096  283599 node_conditions.go:105] duration metric: took 2.500371ms to run NodePressure ...
	I0921 22:24:29.837121  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:30.025715  283599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029542  283599 kubeadm.go:778] kubelet initialised
	I0921 22:24:30.029565  283599 kubeadm.go:779] duration metric: took 3.826857ms waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029572  283599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:24:30.034316  283599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
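pod_ready.go polls each system-critical pod for a Ready condition; the repeated pod_ready.go:102 lines that follow are those polls coming back negative. An equivalent one-off check for the CoreDNS pod above:

    # Prints "True" once the pod is Ready; empty while the pod is
    # still Pending with no Ready condition, as in the polls below.
    kubectl -n kube-system get pod coredns-565d847f94-mrkjn \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'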
	I0921 22:24:26.817684  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:29.317793  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:31.318001  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:30.289213  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.039865  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.040511  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:36.539322  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:33.817371  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:35.817456  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.789530  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:37.289284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:38.539700  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:41.040333  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:37.817967  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:40.318244  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:39.789967  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:42.289726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:43.539636  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:45.540134  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:42.817716  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.818139  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.789355  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:47.288847  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:48.040425  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:50.539475  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:47.317825  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.318211  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.289182  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:51.289938  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:52.539590  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:54.540310  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:51.817491  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:55.818165  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.789719  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:56.289013  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:57.040311  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:59.539775  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.318151  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:00.318254  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.289251  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:00.789124  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.789910  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.040207  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.540336  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.817283  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.817911  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:05.290121  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.789553  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.039774  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.039928  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:11.040136  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.817957  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:10.289528  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:12.789110  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:13.540022  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:16.040513  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:12.317490  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.818433  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.789413  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:16.789947  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:18.539457  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:21.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:17.317880  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.289330  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:21.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:23.539701  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.039677  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:22.317640  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:24.318075  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:23.789488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:25.789726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:28.539400  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:30.540154  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.817737  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.818270  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:31.318310  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.289323  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:30.789442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:32.789667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:33.039502  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.039801  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:33.318392  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.818247  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:34.790488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.288758  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.539221  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.539681  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:41.539999  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:38.317564  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:40.317641  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.289052  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:41.789424  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:44.040284  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:46.540320  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:42.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:45.317732  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:44.289331  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:46.789866  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:49.039837  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:51.540123  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:47.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:49.314620  276511 pod_ready.go:81] duration metric: took 4m0.002300536s waiting for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	E0921 22:25:49.314670  276511 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:25:49.314692  276511 pod_ready.go:38] duration metric: took 4m0.007078344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:25:49.314717  276511 kubeadm.go:631] restartCluster took 4m10.710033944s
	W0921 22:25:49.314858  276511 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:25:49.314887  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
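Having given up on restarting the cluster, minikube wipes it: kubeadm reset removes /etc/kubernetes and the local etcd data so a clean init can follow. The exact command from the log, restated as a standalone sketch (binary path and CRI socket taken from the Run line above):

	# reset the control plane before re-initializing
	sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" \
	  kubeadm reset --cri-socket /run/containerd/containerd.sock --force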
	I0921 22:25:49.289362  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:51.789574  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:54.040292  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:56.540637  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:52.154431  276511 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.839517184s)
	I0921 22:25:52.154487  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:25:52.163969  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:25:52.170969  276511 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:25:52.171027  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:25:52.177996  276511 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
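The exit-status-2 ls is expected here: the reset just deleted all four kubeconfig files, so the stale-config probe finds nothing to clean up and minikube proceeds straight to a fresh init. The same probe as a one-line sketch:

	# stale-config check: absence of these files means a fresh kubeadm init is needed
	sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf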
	I0921 22:25:52.178063  276511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:25:52.213969  276511 kubeadm.go:317] W0921 22:25:52.213140    3321 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:25:52.246713  276511 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:25:52.310910  276511 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
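Both preflight warnings are benign in this containerized environment; the Service-Kubelet one can be silenced by following kubeadm's own hint:

	# from the kubeadm warning above
	sudo systemctl enable kubelet.service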
	I0921 22:25:54.288796  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:56.289801  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:01.184243  276511 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:26:01.184314  276511 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:26:01.184416  276511 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:26:01.184507  276511 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:26:01.184592  276511 kubeadm.go:317] OS: Linux
	I0921 22:26:01.184673  276511 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:26:01.184737  276511 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:26:01.184793  276511 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:26:01.184856  276511 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:26:01.184921  276511 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:26:01.184985  276511 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:26:01.185046  276511 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:26:01.185099  276511 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:26:01.185157  276511 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:26:01.185254  276511 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:26:01.185380  276511 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:26:01.185526  276511 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:26:01.185623  276511 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:26:01.187463  276511 out.go:204]   - Generating certificates and keys ...
	I0921 22:26:01.187540  276511 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:26:01.187594  276511 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:26:01.187659  276511 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:26:01.187785  276511 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:26:01.187900  276511 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:26:01.187958  276511 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:26:01.188014  276511 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:26:01.188086  276511 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:26:01.188221  276511 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:26:01.188336  276511 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:26:01.188409  276511 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:26:01.188488  276511 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:26:01.188556  276511 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:26:01.188636  276511 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:26:01.188731  276511 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:26:01.188817  276511 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:26:01.188953  276511 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:26:01.189087  276511 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:26:01.189191  276511 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:26:01.189310  276511 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:26:01.191284  276511 out.go:204]   - Booting up control plane ...
	I0921 22:26:01.191385  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:26:01.191486  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:26:01.191561  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:26:01.191748  276511 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:26:01.191985  276511 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:26:01.192105  276511 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503275 seconds
	I0921 22:26:01.192289  276511 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:26:01.192460  276511 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:26:01.192545  276511 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:26:01.192839  276511 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:26:01.192906  276511 kubeadm.go:317] [bootstrap-token] Using token: 9ldpwz.b05pw96cyce3l1nr
	I0921 22:26:01.194593  276511 out.go:204]   - Configuring RBAC rules ...
	I0921 22:26:01.194724  276511 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:26:01.194852  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:26:01.195058  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:26:01.195234  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:26:01.195387  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:26:01.195500  276511 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:26:01.195644  276511 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:26:01.195703  276511 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:26:01.195765  276511 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:26:01.195777  276511 kubeadm.go:317] 
	I0921 22:26:01.195861  276511 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:26:01.195872  276511 kubeadm.go:317] 
	I0921 22:26:01.195980  276511 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:26:01.196004  276511 kubeadm.go:317] 
	I0921 22:26:01.196036  276511 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:26:01.196117  276511 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:26:01.196194  276511 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:26:01.196207  276511 kubeadm.go:317] 
	I0921 22:26:01.196286  276511 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:26:01.196303  276511 kubeadm.go:317] 
	I0921 22:26:01.196379  276511 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:26:01.196404  276511 kubeadm.go:317] 
	I0921 22:26:01.196485  276511 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:26:01.196595  276511 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:26:01.196694  276511 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:26:01.196706  276511 kubeadm.go:317] 
	I0921 22:26:01.196820  276511 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:26:01.196920  276511 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:26:01.196931  276511 kubeadm.go:317] 
	I0921 22:26:01.197032  276511 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197181  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:26:01.197220  276511 kubeadm.go:317] 	--control-plane 
	I0921 22:26:01.197231  276511 kubeadm.go:317] 
	I0921 22:26:01.197362  276511 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:26:01.197381  276511 kubeadm.go:317] 
	I0921 22:26:01.197495  276511 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197628  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
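The printed join commands embed a bootstrap token that expires (24h by default). If a node is joined later, a fresh command can be generated on the control plane; this is standard kubeadm, not minikube-specific:

	# regenerate a complete join command with a new token
	kubeadm token create --print-join-command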
	I0921 22:26:01.197660  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:26:01.197674  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:26:01.199797  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:25:59.039749  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.040507  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.201405  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:26:01.205181  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:26:01.205199  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:26:01.218971  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
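Applying the kindnet manifest is what finally lets the node go Ready. A sketch of the follow-up verification, assuming kindnet's pods carry the app=kindnet label used by its manifest:

	# watch the CNI daemonset come up, then the node flip to Ready
	kubectl -n kube-system get pods -l app=kindnet -o wide
	kubectl get nodes -w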
	I0921 22:25:58.789397  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:00.789911  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:03.540344  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:06.039881  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:02.006490  276511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:26:02.006560  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.006575  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.013858  276511 ops.go:34] apiserver oom_adj: -16
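ops.go is reading the apiserver's legacy OOM score: /proc/<pid>/oom_adj ranges from -17 to 15, and more negative values make the kernel's OOM killer avoid the process, so -16 means the apiserver is nearly exempt. The probe itself is runnable by hand on the node:

	# the same check minikube performs (from the Run line above)
	cat /proc/$(pgrep kube-apiserver)/oom_adj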
	I0921 22:26:02.099832  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.694112  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.194089  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.693535  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.193854  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.693713  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.194101  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.694288  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.193619  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.693501  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.289345  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:05.789183  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:08.040230  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:10.539463  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:07.193590  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.693901  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.194072  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.694197  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.193914  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.693488  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.194416  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.693496  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.194435  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.694097  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.289258  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:10.789536  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.790035  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.194461  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.694279  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.193818  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.693711  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.758985  276511 kubeadm.go:1067] duration metric: took 11.752476269s to wait for elevateKubeSystemPrivileges.
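The burst of identical kubectl get sa default calls above is a readiness gate: a kubeadm cluster cannot run ordinary workloads until the controller-manager has created the "default" ServiceAccount. A sketch of the same wait expressed as a shell loop, using the binary and kubeconfig paths from the log:

	# poll until the default ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done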
	I0921 22:26:13.759013  276511 kubeadm.go:398] StartCluster complete in 4m35.198807914s
	I0921 22:26:13.759030  276511 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:13.759144  276511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:26:13.760661  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:14.276964  276511 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
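kubeadm deploys CoreDNS with two replicas; on a single-node cluster minikube scales it down to one, presumably to save memory. The manual equivalent of what kapi.go reports here:

	# same rescale by hand
	kubectl -n kube-system scale deployment coredns --replicas=1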
	I0921 22:26:14.277021  276511 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:26:14.277060  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:26:14.279846  276511 out.go:177] * Verifying Kubernetes components...
	I0921 22:26:14.277154  276511 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:26:14.277306  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:26:14.281313  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:26:14.281349  276511 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281359  276511 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281373  276511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:26:14.281387  276511 addons.go:65] Setting metrics-server=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281397  276511 addons.go:65] Setting dashboard=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281436  276511 addons.go:153] Setting addon dashboard=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281450  276511 addons.go:162] addon dashboard should already be in state true
	I0921 22:26:14.281497  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281400  276511 addons.go:153] Setting addon metrics-server=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281576  276511 addons.go:162] addon metrics-server should already be in state true
	I0921 22:26:14.281377  276511 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	I0921 22:26:14.281640  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	W0921 22:26:14.281653  276511 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:26:14.281684  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281727  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282004  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282138  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282139  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.321366  276511 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.323218  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:26:14.323243  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:26:14.323321  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.323433  276511 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.323452  276511 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:26:14.323478  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.323995  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.331074  276511 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:26:14.333243  276511 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.335670  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:26:14.335699  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:26:14.335828  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.338700  276511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:26:12.540251  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:15.040305  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:14.339971  276511 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.339996  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:26:14.340067  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.357088  276511 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.357118  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:26:14.357179  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.363845  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.373248  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.374001  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.403584  276511 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:26:14.403673  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:26:14.403710  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.597706  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:26:14.597740  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:26:14.598185  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:26:14.598208  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:26:14.678717  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.691157  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:26:14.691190  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:26:14.776824  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.780103  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:26:14.780131  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:26:14.796772  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.796802  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:26:14.877240  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:26:14.877270  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:26:14.886529  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.982072  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:26:14.982106  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:26:15.083042  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:26:15.083073  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:26:15.185025  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:26:15.185058  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:26:15.288358  276511 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:26:15.295798  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:26:15.295830  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:26:15.390667  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:26:15.390693  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:26:15.415462  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.415496  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:26:15.492343  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.887638  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.208874575s)
	I0921 22:26:15.887703  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110843194s)
	I0921 22:26:15.982100  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095511944s)
	I0921 22:26:15.982142  276511 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220921220832-10174"
	I0921 22:26:16.410487  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:16.706261  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213866962s)
	I0921 22:26:16.708800  276511 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:26:16.709899  276511 addons.go:414] enableAddons completed in 2.432760887s
	I0921 22:26:15.290491  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.789818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.539620  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:20.039549  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:18.911099  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:21.409684  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:20.289226  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292776  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292799  265259 node_ready.go:38] duration metric: took 4m0.017444735s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:26:22.294631  265259 out.go:177] 
	W0921 22:26:22.296115  265259 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:26:22.296143  265259 out.go:239] * 
	W0921 22:26:22.296927  265259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:26:22.298511  265259 out.go:177] 
	I0921 22:26:22.539641  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:23.410505  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	[... both waits repeat with identical output roughly every 2-2.5s: pod_ready.go:102 entries from pid 283599 recur unchanged through I0921 22:28:29.540282, and node_ready.go:58 entries from pid 276511 ("no-preload-20220921220832-10174" has status "Ready":"False") recur unchanged through I0921 22:28:24.909931; the intervening duplicate poll entries are elided ...]
	I0921 22:28:30.037464  283599 pod_ready.go:81] duration metric: took 4m0.003103432s waiting for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
	E0921 22:28:30.037491  283599 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:28:30.037512  283599 pod_ready.go:38] duration metric: took 4m0.007931264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:28:30.037542  283599 kubeadm.go:631] restartCluster took 4m11.484871611s
	W0921 22:28:30.037694  283599 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
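
The coredns pod above stays Pending because the node still carries the node.kubernetes.io/not-ready taint, which is why the scheduler reports "0/1 nodes are available: 1 node(s) had untolerated taint". A minimal client-go sketch for listing the taints behind that message; the kubeconfig path is copied from this log, and the rest is illustrative rather than minikube's own code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, t := range n.Spec.Taints {
				// expected here: node.kubernetes.io/not-ready with effect NoSchedule
				fmt.Printf("%s  %s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
			}
		}
	}
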
	I0921 22:28:30.037731  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:28:26.910183  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:28.910722  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:31.410255  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:32.836415  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.798662315s)
	I0921 22:28:32.836470  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:32.846281  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:28:32.853286  283599 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:28:32.853347  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:28:32.860321  283599 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0921 22:28:32.860372  283599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:28:32.899444  283599 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:28:32.899530  283599 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:28:32.927597  283599 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:28:32.927684  283599 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:28:32.927762  283599 kubeadm.go:317] OS: Linux
	I0921 22:28:32.927817  283599 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:28:32.927857  283599 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:28:32.927895  283599 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:28:32.927957  283599 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:28:32.928004  283599 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:28:32.928045  283599 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:28:32.928083  283599 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:28:32.928121  283599 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:28:32.928158  283599 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:28:32.994267  283599 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:28:32.994393  283599 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:28:32.994471  283599 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:28:33.113433  283599 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:28:33.118993  283599 out.go:204]   - Generating certificates and keys ...
	I0921 22:28:33.119145  283599 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:28:33.119247  283599 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:28:33.119310  283599 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:28:33.119362  283599 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:28:33.119432  283599 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:28:33.119501  283599 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:28:33.119554  283599 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:28:33.119605  283599 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:28:33.119666  283599 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:28:33.119759  283599 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:28:33.119797  283599 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:28:33.119873  283599 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:28:33.240892  283599 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:28:33.319256  283599 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:28:33.514290  283599 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:28:33.579294  283599 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:28:33.591185  283599 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:28:33.591951  283599 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:28:33.592077  283599 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:28:33.671909  283599 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:28:33.674209  283599 out.go:204]   - Booting up control plane ...
	I0921 22:28:33.674356  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:28:33.674478  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:28:33.675328  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:28:33.677339  283599 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:28:33.679453  283599 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:28:33.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:35.410708  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.182528  283599 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502979 seconds
	I0921 22:28:40.182719  283599 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:28:40.191775  283599 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:28:40.708308  283599 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:28:40.708506  283599 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:28:41.216221  283599 kubeadm.go:317] [bootstrap-token] Using token: 7zktge.i7kw817sdpmpqput
	I0921 22:28:41.217917  283599 out.go:204]   - Configuring RBAC rules ...
	I0921 22:28:41.218062  283599 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:28:41.221048  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:28:41.225663  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:28:41.227873  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:28:41.229840  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:28:41.231693  283599 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:28:41.238509  283599 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:28:41.448788  283599 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:28:41.684596  283599 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:28:41.686021  283599 kubeadm.go:317] 
	I0921 22:28:41.686112  283599 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:28:41.686121  283599 kubeadm.go:317] 
	I0921 22:28:41.686213  283599 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:28:41.686221  283599 kubeadm.go:317] 
	I0921 22:28:41.686253  283599 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:28:41.687200  283599 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:28:41.687275  283599 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:28:41.687282  283599 kubeadm.go:317] 
	I0921 22:28:41.687347  283599 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:28:41.687361  283599 kubeadm.go:317] 
	I0921 22:28:41.687420  283599 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:28:41.687443  283599 kubeadm.go:317] 
	I0921 22:28:41.687516  283599 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:28:41.687626  283599 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:28:41.687754  283599 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:28:41.687768  283599 kubeadm.go:317] 
	I0921 22:28:41.687856  283599 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:28:41.687945  283599 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:28:41.687952  283599 kubeadm.go:317] 
	I0921 22:28:41.688054  283599 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688176  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:28:41.688202  283599 kubeadm.go:317] 	--control-plane 
	I0921 22:28:41.688207  283599 kubeadm.go:317] 
	I0921 22:28:41.688304  283599 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:28:41.688309  283599 kubeadm.go:317] 
	I0921 22:28:41.688403  283599 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688525  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:28:41.691473  283599 kubeadm.go:317] W0921 22:28:32.894416    3309 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:28:41.691806  283599 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:28:41.691944  283599 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
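
The kubeadm init run that produced the block above is executed through /bin/bash -c so the PATH override onto the version-pinned binaries takes effect. A hedged sketch of that invocation (command text copied from the log line at 22:28:32.860372, ignore-preflight list abridged; this is not minikube's actual ssh_runner code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" ` +
			`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
			`--ignore-preflight-errors=Port-10250,Swap,Mem,SystemVerification` // abridged list
		// Run through a shell, mirroring the logged command, so $PATH expands.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}
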
	I0921 22:28:41.691973  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:28:41.691983  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:28:41.694185  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:28:37.910661  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.410644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:41.695783  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:28:41.699760  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:28:41.699784  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:28:41.776183  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
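
The "recommending kindnet" line above is the CNI auto-selection step: with the docker driver and a non-docker runtime, minikube deploys kindnet via the cni.yaml apply just logged. An illustrative reduction of that decision; the real cni.New logic in minikube weighs more cases than this:

	package main

	import "fmt"

	// chooseCNI is a simplified stand-in for minikube's CNI selection.
	func chooseCNI(driver, containerRuntime string) string {
		if driver == "docker" && containerRuntime != "docker" {
			return "kindnet"
		}
		return "bridge"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "containerd")) // "kindnet", as in the log
	}
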
	I0921 22:28:42.446104  283599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:28:42.446180  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.446216  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.524814  283599 ops.go:34] apiserver oom_adj: -16
	I0921 22:28:42.524918  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.099884  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.600017  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.099303  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.599933  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.100173  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.599961  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.099843  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.599840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.910093  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:44.910463  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:47.099465  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.599512  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.099998  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.599598  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.099840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.599433  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.099931  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.599355  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.099363  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.599865  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.410019  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:49.410428  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:51.410461  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:52.099400  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:52.600056  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.100255  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.599772  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.668975  283599 kubeadm.go:1067] duration metric: took 11.222848116s to wait for elevateKubeSystemPrivileges.
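
The run of "kubectl get sa default" commands above is a readiness probe: elevateKubeSystemPrivileges retries until the default ServiceAccount exists before the minikube-rbac cluster-admin binding can take effect. A client-go sketch of the same wait, with the 500ms interval inferred from the spacing of the log lines and the kubeconfig path taken from the log:

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
			_, getErr := cs.CoreV1().ServiceAccounts("default").Get(
				context.Background(), "default", metav1.GetOptions{})
			return getErr == nil, nil // SA not created yet: keep polling
		})
		if err != nil {
			panic(err)
		}
	}
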
	I0921 22:28:53.669016  283599 kubeadm.go:398] StartCluster complete in 4m35.159165946s
	I0921 22:28:53.669039  283599 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:53.669157  283599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:28:53.670820  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:54.187769  283599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:28:54.187839  283599 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:28:54.187870  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:28:54.187894  283599 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:28:54.190631  283599 out.go:177] * Verifying Kubernetes components...
	I0921 22:28:54.187957  283599 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187964  283599 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187970  283599 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188002  283599 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188076  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:28:54.192035  283599 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192079  283599 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:28:54.192091  283599 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192114  283599 addons.go:162] addon dashboard should already be in state true
	I0921 22:28:54.192162  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192210  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:54.192299  283599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.192580  283599 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192616  283599 addons.go:162] addon metrics-server should already be in state true
	I0921 22:28:54.192633  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192666  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192163  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192666  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.193362  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.193439  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.234974  283599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:28:54.236667  283599 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.236692  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:28:54.236745  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.240000  283599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:28:54.239390  283599 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.241874  283599 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.244335  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:28:54.244363  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	W0921 22:28:54.241874  283599 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:28:54.244424  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.244454  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.244956  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.246658  283599 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.248082  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:28:54.248109  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:28:54.248165  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.272909  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.273873  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.277163  283599 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.277186  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:28:54.277236  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.290041  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.318706  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.398932  283599 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:28:54.399014  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:28:54.496523  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.498431  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.499591  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:28:54.499650  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:28:54.501640  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:28:54.501663  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:28:54.594519  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:28:54.594561  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:28:54.596768  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:28:54.596847  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:28:54.690036  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.690071  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:28:54.700119  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:28:54.700197  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:28:54.876320  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.883544  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:28:54.883571  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:28:54.977006  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:28:54.977040  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:28:55.079240  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:28:55.079273  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:28:55.176309  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:28:55.176344  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:28:55.276282  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:28:55.276317  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:28:55.379016  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.379044  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:28:55.386242  283599 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
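
The host record injection just logged rewrites the coredns ConfigMap: the sed pipeline at 22:28:54.399014 splices a hosts block mapping host.minikube.internal to 192.168.85.1 in ahead of the forward directive in the Corefile. A hedged client-go equivalent of that edit:

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Insert the hosts block directly above the forward directive,
		// matching what the logged sed command does.
		hosts := "        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward . /etc/resolv.conf",
			hosts+"        forward . /etc/resolv.conf", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
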
	I0921 22:28:55.399129  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.595061  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098450009s)
	I0921 22:28:55.786581  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288109437s)
	I0921 22:28:56.081753  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205376891s)
	I0921 22:28:56.081804  283599 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:56.387178  283599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0921 22:28:56.388690  283599 addons.go:414] enableAddons completed in 2.200797183s
	I0921 22:28:56.404853  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:53.909716  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:55.910611  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:58.405031  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:00.405582  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:58.409630  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:00.410447  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:02.905572  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:05.405473  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:02.910338  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:05.410066  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:07.904364  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:09.905589  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:07.910279  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:10.410127  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:12.405034  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:14.905741  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:12.910452  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:15.410553  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:17.404952  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:19.405175  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:21.405392  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:17.910479  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:20.410559  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:23.405620  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:25.905592  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:22.909898  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:24.910567  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:27.905775  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:30.405483  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:27.410039  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:29.410131  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:31.410291  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:32.904863  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:35.404709  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:33.910459  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:36.410445  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:37.905690  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:40.405229  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:38.910532  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:41.409671  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:42.905360  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:44.905907  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:43.410422  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:45.910511  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:47.404631  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:49.405402  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:48.409951  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:50.410363  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:51.904997  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:53.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:56.405228  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:52.411261  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:54.910318  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:58.405705  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:00.905348  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:57.409683  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:59.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.404779  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:05.404833  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:01.909994  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.910230  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:06.410036  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:07.405804  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:09.904912  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:08.909550  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:10.910475  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:13.409889  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:14.413229  276511 node_ready.go:38] duration metric: took 4m0.009606009s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:30:14.416209  276511 out.go:177] 
	W0921 22:30:14.417896  276511 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:30:14.417916  276511 out.go:239] * 
	W0921 22:30:14.418711  276511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:30:14.420798  276511 out.go:177] 
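
Both failures in this test follow the same shape: node_ready.go polls the node object until its Ready condition turns True, and exits with GUEST_START once the 6m budget is exhausted (the Ready wait alone ran 4m0s here). A minimal sketch of that loop for the node that just timed out; the node name and kubeconfig path come from the log, the interval is illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			n, getErr := cs.CoreV1().Nodes().Get(
				context.Background(), "no-preload-20220921220832-10174", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // tolerate transient API errors, keep polling
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}
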
	I0921 22:30:11.905117  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:13.905422  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:15.906020  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:18.404644  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:20.404682  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:22.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:24.905233  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:27.404679  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:29.904692  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:31.905266  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:34.405088  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:36.405476  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:38.904414  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:40.905386  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:43.404507  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:45.405356  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:47.904571  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:50.405311  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:52.904564  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:54.905119  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:57.405076  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:59.405121  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:01.904816  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:03.905408  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:05.905565  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:08.404718  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:10.405173  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:12.905041  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:14.905498  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:17.405656  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:19.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:22.405514  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:24.904738  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:27.404689  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:29.405353  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:31.904926  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:34.405471  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:36.905606  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:39.404550  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:41.405513  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:43.905655  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:46.405308  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:48.405699  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:50.905270  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:53.405205  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:55.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:57.905798  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:00.405370  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:02.405480  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:04.904649  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:06.905338  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:09.404845  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:11.405472  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:13.905469  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:16.405211  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:18.405365  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:20.904698  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:23.405458  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:25.905299  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:27.905466  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:29.905633  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:32.404583  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:34.404795  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:36.405323  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:38.405395  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:40.904581  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:42.905533  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:45.405100  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:47.405337  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:49.405417  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:51.905042  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.404654  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.406831  283599 node_ready.go:38] duration metric: took 4m0.00786279s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:32:54.409456  283599 out.go:177] 
	W0921 22:32:54.411031  283599 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:32:54.411055  283599 out.go:239] * 
	W0921 22:32:54.411890  283599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:32:54.413449  283599 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c27de2fb73a29       d921cee849482       48 seconds ago      Running             kindnet-cni               4                   c154261f9ef9c
	2fe83f98a315c       d921cee849482       4 minutes ago       Exited              kindnet-cni               3                   c154261f9ef9c
	8c26b3ec700f1       1c7d8c51823b5       13 minutes ago      Running             kube-proxy                0                   b9860d4aa1834
	a520a5b3d71d5       ca0ea1ee3cfd3       13 minutes ago      Running             kube-scheduler            2                   5d1b185924c31
	54979eccafeb5       a8a176a5d5d69       13 minutes ago      Running             etcd                      2                   3aeccdb1ccfbb
	fc632c61d18ce       dbfceb93c69b6       13 minutes ago      Running             kube-controller-manager   2                   6c963e60ffdaf
	fbe07ea9b6cd1       97801f8394908       13 minutes ago      Running             kube-apiserver            2                   0e8d68b117ca3
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:21:22 UTC, end at Wed 2022-09-21 22:39:17 UTC. --
	Sep 21 22:31:37 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:31:37.759166406Z" level=info msg="RemoveContainer for \"f637568210c7a2384ec1a6884bfbe8891208a46727d367fc9ddbb657dc488b1d\" returns successfully"
	Sep 21 22:31:53 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:31:53.109939059Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:31:53 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:31:53.122466658Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"812223373592633117ae166606f2b7bd75732b1a66d9c1637bf517d87c604bfe\""
	Sep 21 22:31:53 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:31:53.123029004Z" level=info msg="StartContainer for \"812223373592633117ae166606f2b7bd75732b1a66d9c1637bf517d87c604bfe\""
	Sep 21 22:31:53 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:31:53.291881703Z" level=info msg="StartContainer for \"812223373592633117ae166606f2b7bd75732b1a66d9c1637bf517d87c604bfe\" returns successfully"
	Sep 21 22:34:33 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:33.724717036Z" level=info msg="shim disconnected" id=812223373592633117ae166606f2b7bd75732b1a66d9c1637bf517d87c604bfe
	Sep 21 22:34:33 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:33.724789841Z" level=warning msg="cleaning up after shim disconnected" id=812223373592633117ae166606f2b7bd75732b1a66d9c1637bf517d87c604bfe namespace=k8s.io
	Sep 21 22:34:33 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:33.724802910Z" level=info msg="cleaning up dead shim"
	Sep 21 22:34:33 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:33.734451754Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:34:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5245 runtime=io.containerd.runc.v2\n"
	Sep 21 22:34:34 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:34.077071474Z" level=info msg="RemoveContainer for \"fab8999ce76feeeff063c9d2ac345193f7ab2fc3e8c6e8111eb98766d74ff485\""
	Sep 21 22:34:34 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:34.082003004Z" level=info msg="RemoveContainer for \"fab8999ce76feeeff063c9d2ac345193f7ab2fc3e8c6e8111eb98766d74ff485\" returns successfully"
	Sep 21 22:34:59 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:59.109394602Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:34:59 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:59.122464733Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f\""
	Sep 21 22:34:59 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:59.122897381Z" level=info msg="StartContainer for \"2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f\""
	Sep 21 22:34:59 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:34:59.197591353Z" level=info msg="StartContainer for \"2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f\" returns successfully"
	Sep 21 22:37:39 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:37:39.729236593Z" level=info msg="shim disconnected" id=2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f
	Sep 21 22:37:39 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:37:39.729306585Z" level=warning msg="cleaning up after shim disconnected" id=2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f namespace=k8s.io
	Sep 21 22:37:39 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:37:39.729320870Z" level=info msg="cleaning up dead shim"
	Sep 21 22:37:39 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:37:39.739090038Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:37:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5363 runtime=io.containerd.runc.v2\n"
	Sep 21 22:37:40 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:37:40.410832397Z" level=info msg="RemoveContainer for \"812223373592633117ae166606f2b7bd75732b1a66d9c1637bf517d87c604bfe\""
	Sep 21 22:37:40 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:37:40.415518884Z" level=info msg="RemoveContainer for \"812223373592633117ae166606f2b7bd75732b1a66d9c1637bf517d87c604bfe\" returns successfully"
	Sep 21 22:38:29 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:38:29.108391211Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Sep 21 22:38:29 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:38:29.120516419Z" level=info msg="CreateContainer within sandbox \"c154261f9ef9c866e85b894622699d9ca50bb1216088c5cd9e748fa81bbc51de\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"c27de2fb73a29c8216e47ccdec808b262eebeed97fe37eda3d305b2fe2e15606\""
	Sep 21 22:38:29 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:38:29.121017601Z" level=info msg="StartContainer for \"c27de2fb73a29c8216e47ccdec808b262eebeed97fe37eda3d305b2fe2e15606\""
	Sep 21 22:38:29 no-preload-20220921220832-10174 containerd[386]: time="2022-09-21T22:38:29.280673442Z" level=info msg="StartContainer for \"c27de2fb73a29c8216e47ccdec808b262eebeed97fe37eda3d305b2fe2e15606\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220921220832-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220921220832-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=no-preload-20220921220832-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:25:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220921220832-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:39:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:36:23 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:36:23 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:36:23 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:36:23 +0000   Wed, 21 Sep 2022 22:25:55 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-20220921220832-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                44c6c62a-5061-4f07-a2f0-9d563da1b73e
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-20220921220832-10174                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-ww9rl                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-no-preload-20220921220832-10174              250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-20220921220832-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-52x7l                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-20220921220832-10174              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x4 over 13m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x4 over 13m)  kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-20220921220832-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-20220921220832-10174 event: Registered Node no-preload-20220921220832-10174 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [54979eccafeb56940caff5e4877cc59e8d00548c625c65f2549da307ec829506] <==
	* {"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:25:55.087Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2022-09-21T22:25:55.577Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-20220921220832-10174 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:25:55.578Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:25:55.580Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:25:55.581Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2022-09-21T22:35:55.696Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":528}
	{"level":"info","ts":"2022-09-21T22:35:55.697Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":528,"took":"423.505µs"}
	
	* 
	* ==> kernel <==
	*  22:39:17 up  1:21,  0 users,  load average: 0.22, 0.29, 0.81
	Linux no-preload-20220921220832-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [fbe07ea9b6cd1b2387645030cac1d4cc68659f594af25721d8138cd4ce88e0cc] <==
	* W0921 22:33:59.186903       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:33:59.186972       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:33:59.186983       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:35:59.190220       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:35:59.190294       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:35:59.190301       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:35:59.190319       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:35:59.190350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:35:59.191514       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:36:59.191309       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:36:59.191385       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:36:59.191397       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:36:59.192390       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:36:59.192417       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:36:59.192433       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:38:59.191798       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:38:59.191882       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:38:59.191894       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:38:59.192964       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:38:59.192997       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:38:59.193005       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [fc632c61d18cec99e31615712601080a0a8d73d2a421dd3fb061f64331bf7d7c] <==
	* W0921 22:33:13.948441       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:33:43.530631       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:33:43.958197       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:34:13.536193       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:34:13.969958       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:34:43.542536       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:34:43.980315       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:35:13.549787       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:35:13.991209       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:35:43.556035       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:35:44.000068       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:36:13.562259       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:36:14.010249       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:36:43.568335       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:36:44.020868       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:37:13.574419       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:37:14.031623       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:37:43.580590       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:37:44.042416       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:38:13.588232       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:38:14.053638       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:38:43.594558       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:38:44.064645       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:39:13.600905       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:39:14.074723       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [8c26b3ec700f1f2e31061bc3b5571524698489a128551f988929d8f40c0cd123] <==
	* I0921 22:26:14.785017       1 node.go:163] Successfully retrieved node IP: 192.168.94.2
	I0921 22:26:14.785128       1 server_others.go:138] "Detected node IP" address="192.168.94.2"
	I0921 22:26:14.785168       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:26:14.888322       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:26:14.888381       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:26:14.888396       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:26:14.888420       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:26:14.888469       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:26:14.888613       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:26:14.888846       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:26:14.888858       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:26:14.889551       1 config.go:444] "Starting node config controller"
	I0921 22:26:14.889563       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:26:14.889861       1 config.go:317] "Starting service config controller"
	I0921 22:26:14.889874       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:26:14.889897       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:26:14.889901       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:26:14.989950       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:26:14.990008       1 shared_informer.go:262] Caches are synced for service config
	I0921 22:26:14.990012       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a520a5b3d71d5436376cb6ec2cc229690250107ed3a13462565666b39cd14a9f] <==
	* W0921 22:25:58.301074       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0921 22:25:58.301095       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:25:58.301162       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0921 22:25:58.301187       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:25:58.301245       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:25:58.301256       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0921 22:25:58.301266       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:25:58.301269       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0921 22:25:58.301326       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0921 22:25:58.301348       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0921 22:25:58.301355       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0921 22:25:58.301369       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0921 22:25:58.301392       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:25:58.301411       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:25:58.301416       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0921 22:25:58.301429       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0921 22:25:58.301487       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:25:58.301507       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0921 22:25:59.323662       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:25:59.323768       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:25:59.357767       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:25:59.357803       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:25:59.383080       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:25:59.383124       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0921 22:25:59.894726       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:21:22 UTC, end at Wed 2022-09-21 22:39:18 UTC. --
	Sep 21 22:37:51 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:37:51.463984    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:37:53 no-preload-20220921220832-10174 kubelet[3866]: I0921 22:37:53.106299    3866 scope.go:115] "RemoveContainer" containerID="2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f"
	Sep 21 22:37:53 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:37:53.106701    3866 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ww9rl_kube-system(68c0d807-f3cb-4a87-8603-c99649d89553)\"" pod="kube-system/kindnet-ww9rl" podUID=68c0d807-f3cb-4a87-8603-c99649d89553
	Sep 21 22:37:56 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:37:56.465556    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:01 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:01.467099    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:05 no-preload-20220921220832-10174 kubelet[3866]: I0921 22:38:05.106468    3866 scope.go:115] "RemoveContainer" containerID="2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f"
	Sep 21 22:38:05 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:05.106782    3866 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ww9rl_kube-system(68c0d807-f3cb-4a87-8603-c99649d89553)\"" pod="kube-system/kindnet-ww9rl" podUID=68c0d807-f3cb-4a87-8603-c99649d89553
	Sep 21 22:38:06 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:06.467863    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:11 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:11.468229    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:16 no-preload-20220921220832-10174 kubelet[3866]: I0921 22:38:16.105841    3866 scope.go:115] "RemoveContainer" containerID="2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f"
	Sep 21 22:38:16 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:16.106122    3866 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ww9rl_kube-system(68c0d807-f3cb-4a87-8603-c99649d89553)\"" pod="kube-system/kindnet-ww9rl" podUID=68c0d807-f3cb-4a87-8603-c99649d89553
	Sep 21 22:38:16 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:16.469741    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:21 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:21.471199    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:26 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:26.472368    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:29 no-preload-20220921220832-10174 kubelet[3866]: I0921 22:38:29.106070    3866 scope.go:115] "RemoveContainer" containerID="2fe83f98a315c181347bc939a3196e2a9f4728c838042322e1ed89d8cbfcb61f"
	Sep 21 22:38:31 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:31.473620    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:36 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:36.474761    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:41 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:41.475696    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:46 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:46.476879    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:51 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:51.478346    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:38:56 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:38:56.480107    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:39:01 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:39:01.480952    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:39:06 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:39:06.482492    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:39:11 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:39:11.484176    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:39:16 no-preload-20220921220832-10174 kubelet[3866]: E0921 22:39:16.485533    3866 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld: exit status 1 (60.001187ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-86pzk" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-qrk4q" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-lsnrl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-gh8ld" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220921220832-10174 describe pod coredns-565d847f94-86pzk metrics-server-5c8fd5cf8-qrk4q storage-provisioner dashboard-metrics-scraper-7b94984548-lsnrl kubernetes-dashboard-54596f475f-gh8ld: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.27s)
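Post-mortem note (editorial, inferred only from the log above; not harness output): the kindnet-cni container of pod kindnet-ww9rl crash-loops (the containerd log shows attempts 2 through 4), so the CNI plugin never initializes; kubelet keeps reporting NetworkReady=false, the node retains the node.kubernetes.io/not-ready:NoSchedule taint, and the dashboard pods stay Pending until the 9m0s wait expires. A minimal manual triage sketch, assuming the profile's kubeconfig context is still reachable (the pod and container names below are taken from this log and may differ on other runs):

	kubectl --context no-preload-20220921220832-10174 get nodes
	kubectl --context no-preload-20220921220832-10174 -n kube-system get pods -o wide
	kubectl --context no-preload-20220921220832-10174 -n kube-system logs kindnet-ww9rl -c kindnet-cni --previous
	kubectl --context no-preload-20220921220832-10174 describe node no-preload-20220921220832-10174

The --previous flag returns the logs of the last terminated kindnet-cni attempt, which is where the underlying crash reason would appear.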

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-54596f475f-z5fzh" [afcd3492-eb5b-4d68-a695-b4eaa614dbdf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0921 22:34:20.482085   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 22:34:21.247642   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/old-k8s-version-20220921220722-10174/client.crt: no such file or directory
E0921 22:34:27.010008   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous warning repeated 13 more times]
E0921 22:41:31.496301   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous warning repeated 6 more times]
E0921 22:41:38.505011   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-09-21 22:41:56.804122063 +0000 UTC m=+4484.942574072
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe po kubernetes-dashboard-54596f475f-z5fzh -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221118-10174 describe po kubernetes-dashboard-54596f475f-z5fzh -n kubernetes-dashboard: context deadline exceeded (1.566µs)
start_stop_delete_test.go:274: kubectl --context default-k8s-different-port-20220921221118-10174 describe po kubernetes-dashboard-54596f475f-z5fzh -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:274: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 logs kubernetes-dashboard-54596f475f-z5fzh -n kubernetes-dashboard
start_stop_delete_test.go:274: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221118-10174 logs kubernetes-dashboard-54596f475f-z5fzh -n kubernetes-dashboard: context deadline exceeded (191ns)
start_stop_delete_test.go:274: kubectl --context default-k8s-different-port-20220921221118-10174 logs kubernetes-dashboard-54596f475f-z5fzh -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
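The failed wait above is a label-selector poll: list pods in the kubernetes-dashboard namespace matching k8s-app=kubernetes-dashboard until one reports Running, within the 9m0s budget. Below is a minimal client-go sketch of that kind of loop; the function name, the 3s interval, and the kubeconfig path are illustrative assumptions, not the actual helpers_test.go helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDashboardPod(cs kubernetes.Interface) error {
	// 9m0s is the budget reported in the failure above; the 3s interval is an assumption.
	return wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// This is where the repeated "client rate limiter Wait returned
			// an error" warnings above would surface.
			fmt.Println("WARNING: pod list returned:", err)
			return false, nil // tolerate errors and keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDashboardPod(cs); err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}

Returning nil from the condition on error, rather than aborting, is what lets such a loop ride out transient API-server unavailability after a restart; in this run the pod never reached Running, so the outer 9m0s timeout fired instead.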
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220921221118-10174
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220921221118-10174:

-- stdout --
	[
	    {
	        "Id": "37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112",
	        "Created": "2022-09-21T22:11:25.759772693Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283906,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-09-21T22:24:02.351378691Z",
	            "FinishedAt": "2022-09-21T22:24:01.088670196Z"
	        },
	        "Image": "sha256:5f58fddaff4349397c9f51a6b73926a9b118af22b4ccb4492e84c74d0b59dcd4",
	        "ResolvConfPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hostname",
	        "HostsPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/hosts",
	        "LogPath": "/var/lib/docker/containers/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112/37728b19138a640b956531d5c576658820e978bb8c95c5dc17cd7f348fdb8112-json.log",
	        "Name": "/default-k8s-different-port-20220921221118-10174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220921221118-10174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220921221118-10174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19-init/diff:/var/lib/docker/overlay2/e464f9a636cb97832b8aa5685163afb26b14d037782877c3831e0f261b6fb12b/diff:/var/lib/docker/overlay2/6f2f230003f0711dfcff3390931e14f9abd39be1cd2b674079cb6a0fc99dab7f/diff:/var/lib/docker/overlay2/a539506727a1471a83b57694b6923a137699d9d47fe00601ffa27c34d63d17c2/diff:/var/lib/docker/overlay2/7dc48fda616494e9959c05191521a1f5bdf6be0555a35fc1d7ce85b87a0522ad/diff:/var/lib/docker/overlay2/6a1c06da85f8c03c9712693ca8a75d804e7b9f135a6b92e3929385cc791b0b3a/diff:/var/lib/docker/overlay2/d178ee6cf0c027c7943422dc71b38eb93ee131daede423f6cf283424eaebc517/diff:/var/lib/docker/overlay2/f4c8d81a89934a29db99d9ed4b4542a8882465ab2b6419ba4a3fe09214d25d9e/diff:/var/lib/docker/overlay2/4411575e37772974371f716b7c4abb3053138ddcaf53d883eb3116b4c3a37f78/diff:/var/lib/docker/overlay2/a3f8db0c0d790efcede2f41e470d293e280b022ff0bbb46c8bc25f190e3146fc/diff:/var/lib/docker/overlay2/a5bfd3
703378d6d5b2622598a13b4b2eb9203659c1ffff9aa93944ef8d20d22c/diff:/var/lib/docker/overlay2/3c24c9c8a37c1a488d12209df991b3857ccbc448a3329e46a8d48fc28ef81b83/diff:/var/lib/docker/overlay2/199771963b33bc809e912a6d428c556897f0ba04db2268e08695cb7ce0ee3bad/diff:/var/lib/docker/overlay2/4bc13570e456a5d261fbaf68140f2e5b2ea10c6683623858e16ee3ed1b117ef7/diff:/var/lib/docker/overlay2/9831fdfd44eff7e3f4780d10f5480f090b388735bb188e7a1e93251b7769d8f3/diff:/var/lib/docker/overlay2/28126fe31ddfbf99d4588c5ac750503b9b6cee8f2e00781941cff5f4323d1c76/diff:/var/lib/docker/overlay2/173221e8ff5f6ec22870429dc8bb01272ade638fe47275d1daa1ce07952c8c8c/diff:/var/lib/docker/overlay2/53bc2e4fa25c7c694765cf4d57e59552b26eed732ba59b8d74b221ea0ada7479/diff:/var/lib/docker/overlay2/0175a1cb7e1c92247b6cbac270da49d811d3508e193c1c2e47bef8cb769a47bf/diff:/var/lib/docker/overlay2/1651afeb98cf83af553f3e0a4dcb564af84cb440147702fddc0133cdae08e879/diff:/var/lib/docker/overlay2/64c0563be8f8902fe0e9d207f839be98da0e23d6d839403a7e5498f0468e6b75/diff:/var/lib/d
ocker/overlay2/c9ce1f7c22e681e0e279ce7656144b4c75e2d24f7297acad69e621f2756b75e8/diff:/var/lib/docker/overlay2/469e6b8bbc078383af3dc64c254906774ab56a833c4de1dd39b46bfa27fee3b0/diff:/var/lib/docker/overlay2/797536b923ada4261904843a51188063c401089eae5fec971f1dfb3c21d000a8/diff:/var/lib/docker/overlay2/9d5cbab97d75347795fc5814806362514e12ffc998b48411ecf1d23f980badf6/diff:/var/lib/docker/overlay2/8681f41e3df379cfdc8fd52b2e5837512b8e36a64c87fb159548a876d16e691d/diff:/var/lib/docker/overlay2/78ea1917a0aca968f65d796cbc2ee95d7d190a700496b9e47b29884fe13b1bec/diff:/var/lib/docker/overlay2/c3c231a74b7992f1fdc04b623c10d7791eab4fd97c803ed2610d595097c682c2/diff:/var/lib/docker/overlay2/f7b38a2a87d3958318f3a4b244e33d8096cd001a8fcb916eec022b0679382900/diff:/var/lib/docker/overlay2/1107be3397c1dfc460decaf307982d427cc431beb7938fb0e78f4f7cbc0afd3b/diff:/var/lib/docker/overlay2/59319d0c4cc381deff6e51ba1cc718b7cfd4fc557e74b8e83bd228afca501c8c/diff:/var/lib/docker/overlay2/9d031408a1ab9825246b7bba81db2596cb92e70fbedc70fba9be1d25320
8529f/diff:/var/lib/docker/overlay2/5800b717c814431baf3cc17520b099e7e011915cdeb46ed6360ec2f831a926ec/diff:/var/lib/docker/overlay2/6660d8dfc6dbb7f41f871ff7ec7e0fdfdb2ca3480141cdd4cce92869cb5382bf/diff:/var/lib/docker/overlay2/525a566b79110ff18e78880f2652b8d9cfcfbdbf3272fedb650933fcbbf869bd/diff:/var/lib/docker/overlay2/a0730aa8e0952fc661048d3a7f986ff246a5b2a29ad4e8195970bb407e3cecfc/diff:/var/lib/docker/overlay2/f9d25765ae914f5f013b5f46292f03c39d82f675d2102904f5fabecf07189337/diff:/var/lib/docker/overlay2/b7b34d126964ff60bc0fb625da7343e53e8bb4e77d5e72e33257c28b3d395ca3/diff:/var/lib/docker/overlay2/8514d6cc3775ebdd86f683a78503419bc97346c0059ac0d48563d2edec9727f0/diff:/var/lib/docker/overlay2/dd6f7bead5718d42040b4ee90ce09fa15720d04f3c58e2964b1d5b241bf5b7ed/diff:/var/lib/docker/overlay2/1a8adeb0355a42e9284e241b527930f4f6b7108d459c10550d5d0c4451f9e924/diff:/var/lib/docker/overlay2/703991449e0070d93945e4435da56144ce54e461e877e0b084144663d46b10f6/diff:/var/lib/docker/overlay2/65855711cfa445d374537e6268fd1bea2f62ea
bf2f590842933254b4755050dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a0164abb3fd782df784bf984b10c980312d3270f36e5612924a87806ba5a19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220921221118-10174",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220921221118-10174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220921221118-10174",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220921221118-10174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36a93f9568ff0607fd762c264a5429499a3bd1c6641a087329f11f0872de9644",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49441"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49440"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/36a93f9568ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220921221118-10174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "37728b19138a",
	                        "default-k8s-different-port-20220921221118-10174"
	                    ],
	                    "NetworkID": "e093ea2ee154cf6d0e5d3b4a191700b36287f8ecd49e1b54f684a8f299ea6b79",
	                    "EndpointID": "309e329d1f6701bbb84d1c083ed29999da2a9bd8b0ce2dba5c615ae7a0f15ea3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
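The inspect payload above is typically consumed one field at a time rather than as a whole document, using docker's --format/Go-template mechanism (the same shape as the "docker container inspect ... --format={{.State.Status}}" calls later in this log). A minimal sketch, assuming only that a docker CLI is on PATH; the helper name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus pulls a single templated field out of docker inspect,
// avoiding any dependency on the full (version-dependent) inspect schema.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Against the container in the inspect output above, this prints "running".
	status, err := containerStatus("default-k8s-different-port-20220921221118-10174")
	if err != nil {
		panic(err)
	}
	fmt.Println(status)
}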
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220921221118-10174 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                         | old-k8s-version-20220921220722-10174            | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | old-k8s-version-20220921220722-10174                       |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC |                     |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:17 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220921221720-10174 --memory=2200           | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:17 UTC | 21 Sep 22 22:18 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220921221720-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:18 UTC | 21 Sep 22 22:18 UTC |
	|         | newest-cni-20220921221720-10174                            |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC | 21 Sep 22 22:21 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:21 UTC |                     |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --preload=false                                |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --container-runtime=containerd                             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:23 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:23 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC | 21 Sep 22 22:24 UTC |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220921221118-10174 | jenkins | v1.27.0 | 21 Sep 22 22:24 UTC |                     |
	|         | default-k8s-different-port-20220921221118-10174            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         |  --container-runtime=containerd                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.2                               |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220921220439-10174                | jenkins | v1.27.0 | 21 Sep 22 22:35 UTC | 21 Sep 22 22:35 UTC |
	|         | embed-certs-20220921220439-10174                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220921220832-10174                 | jenkins | v1.27.0 | 21 Sep 22 22:39 UTC | 21 Sep 22 22:39 UTC |
	|         | no-preload-20220921220832-10174                            |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 22:24:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 22:24:01.692796  283599 out.go:296] Setting OutFile to fd 1 ...
	I0921 22:24:01.693211  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693232  283599 out.go:309] Setting ErrFile to fd 2...
	I0921 22:24:01.693240  283599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 22:24:01.693504  283599 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 22:24:01.694665  283599 out.go:303] Setting JSON to false
	I0921 22:24:01.696140  283599 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3993,"bootTime":1663795049,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 22:24:01.696247  283599 start.go:125] virtualization: kvm guest
	I0921 22:24:01.698874  283599 out.go:177] * [default-k8s-different-port-20220921221118-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 22:24:01.701214  283599 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 22:24:01.701128  283599 notify.go:214] Checking for updates...
	I0921 22:24:01.703092  283599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 22:24:01.704791  283599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:01.706544  283599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 22:24:01.708172  283599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 22:23:57.318050  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:23:59.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:01.710349  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:01.710930  283599 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 22:24:01.744026  283599 docker.go:137] docker version: linux-20.10.18
	I0921 22:24:01.744136  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.840732  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.764457724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.840851  283599 docker.go:254] overlay module found
	I0921 22:24:01.843051  283599 out.go:177] * Using the docker driver based on existing profile
	I0921 22:24:01.844347  283599 start.go:284] selected driver: docker
	I0921 22:24:01.844371  283599 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.844475  283599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 22:24:01.845300  283599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 22:24:01.940944  283599 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 22:24:01.86716064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 22:24:01.941199  283599 start_flags.go:867] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0921 22:24:01.941223  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:01.941231  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:01.941249  283599 start_flags.go:316] config:
	{Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:01.944240  283599 out.go:177] * Starting control plane node default-k8s-different-port-20220921221118-10174 in cluster default-k8s-different-port-20220921221118-10174
	I0921 22:24:01.945596  283599 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 22:24:01.946905  283599 out.go:177] * Pulling base image ...
	I0921 22:24:01.948255  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:01.948306  283599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 22:24:01.948321  283599 cache.go:57] Caching tarball of preloaded images
	I0921 22:24:01.948361  283599 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 22:24:01.948572  283599 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0921 22:24:01.948588  283599 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.2 on containerd
	I0921 22:24:01.948702  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:01.976413  283599 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon, skipping pull
	I0921 22:24:01.976445  283599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in daemon, skipping load
	I0921 22:24:01.976457  283599 cache.go:208] Successfully downloaded all kic artifacts
	I0921 22:24:01.976502  283599 start.go:364] acquiring machines lock for default-k8s-different-port-20220921221118-10174: {Name:mk6a2906d520bc1db61074ef435cf249d094e940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0921 22:24:01.976622  283599 start.go:368] acquired machines lock for "default-k8s-different-port-20220921221118-10174" in 78.111µs
	I0921 22:24:01.976652  283599 start.go:96] Skipping create...Using existing machine configuration
	I0921 22:24:01.976660  283599 fix.go:55] fixHost starting: 
	I0921 22:24:01.976899  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.002084  283599 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220921221118-10174: state=Stopped err=<nil>
	W0921 22:24:02.002122  283599 fix.go:129] unexpected machine state, will restart: <nil>
	I0921 22:24:02.004632  283599 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220921221118-10174" ...
	I0921 22:24:00.289698  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.790230  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:02.006307  283599 cli_runner.go:164] Run: docker start default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.358108  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:24:02.385298  283599 kic.go:415] container "default-k8s-different-port-20220921221118-10174" state is running.
	I0921 22:24:02.385684  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.412757  283599 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/config.json ...
	I0921 22:24:02.412997  283599 machine.go:88] provisioning docker machine ...
	I0921 22:24:02.413031  283599 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220921221118-10174"
	I0921 22:24:02.413108  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:02.438229  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:02.438400  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:02.438416  283599 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220921221118-10174 && echo "default-k8s-different-port-20220921221118-10174" | sudo tee /etc/hostname
	I0921 22:24:02.439038  283599 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34230->127.0.0.1:49443: read: connection reset by peer
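
Note: the dial error above is expected noise. The container was started only a second earlier, so the first SSH handshake is reset while sshd inside it is still coming up; libmachine retries on its own and the hostname command succeeds at 22:24:05 below. A hand-rolled equivalent of that wait might look like the following sketch (port and profile name are taken from this log, the key path is abbreviated, and the loop itself is illustrative rather than minikube code):

	# illustrative wait-for-sshd loop; libmachine retries with its own backoff
	KEY=.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa
	until ssh -i "$KEY" -p 49443 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	          docker@127.0.0.1 true 2>/dev/null; do
	    sleep 1
	done
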
	I0921 22:24:05.584682  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220921221118-10174
	
	I0921 22:24:05.584766  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.608825  283599 main.go:134] libmachine: Using SSH client type: native
	I0921 22:24:05.609026  283599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ec520] 0x7ef6a0 <nil>  [] 0s} 127.0.0.1 49443 <nil> <nil>}
	I0921 22:24:05.609059  283599 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220921221118-10174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220921221118-10174/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220921221118-10174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0921 22:24:05.739656  283599 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0921 22:24:05.739694  283599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
	I0921 22:24:05.739749  283599 ubuntu.go:177] setting up certificates
	I0921 22:24:05.739765  283599 provision.go:83] configureAuth start
	I0921 22:24:05.739824  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.764789  283599 provision.go:138] copyHostCerts
	I0921 22:24:05.764839  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
	I0921 22:24:05.764846  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
	I0921 22:24:05.764904  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
	I0921 22:24:05.764993  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
	I0921 22:24:05.765005  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
	I0921 22:24:05.765028  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
	I0921 22:24:05.765086  283599 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
	I0921 22:24:05.765095  283599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
	I0921 22:24:05.765118  283599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1675 bytes)
	I0921 22:24:05.765169  283599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220921221118-10174 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220921221118-10174]
	I0921 22:24:05.914466  283599 provision.go:172] copyRemoteCerts
	I0921 22:24:05.914534  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0921 22:24:05.914564  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:05.939805  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.031315  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0921 22:24:06.048618  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0921 22:24:06.065530  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0921 22:24:06.083800  283599 provision.go:86] duration metric: configureAuth took 344.021748ms
	I0921 22:24:06.083828  283599 ubuntu.go:193] setting minikube options for container-runtime
	I0921 22:24:06.083988  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:24:06.083999  283599 machine.go:91] provisioned docker machine in 3.670987023s
	I0921 22:24:06.084006  283599 start.go:300] post-start starting for "default-k8s-different-port-20220921221118-10174" (driver="docker")
	I0921 22:24:06.084012  283599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0921 22:24:06.084049  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0921 22:24:06.084088  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.108286  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.203139  283599 ssh_runner.go:195] Run: cat /etc/os-release
	I0921 22:24:06.205811  283599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0921 22:24:06.205839  283599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0921 22:24:06.205852  283599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0921 22:24:06.205864  283599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0921 22:24:06.205880  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
	I0921 22:24:06.205944  283599 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
	I0921 22:24:06.206037  283599 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem -> 101742.pem in /etc/ssl/certs
	I0921 22:24:06.206142  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0921 22:24:06.212569  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:06.229418  283599 start.go:303] post-start completed in 145.398445ms
	I0921 22:24:06.229483  283599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 22:24:06.229517  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.253305  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.340119  283599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0921 22:24:06.344050  283599 fix.go:57] fixHost completed within 4.367385464s
	I0921 22:24:06.344071  283599 start.go:83] releasing machines lock for "default-k8s-different-port-20220921221118-10174", held for 4.367430848s
	I0921 22:24:06.344157  283599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368445  283599 ssh_runner.go:195] Run: systemctl --version
	I0921 22:24:06.368501  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.368505  283599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0921 22:24:06.368550  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:24:06.394444  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.396066  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:24:06.513229  283599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0921 22:24:06.524587  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0921 22:24:06.533746  283599 docker.go:188] disabling docker service ...
	I0921 22:24:06.533795  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0921 22:24:06.543075  283599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0921 22:24:06.551813  283599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0921 22:24:06.629483  283599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0921 22:24:01.818416  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:04.317912  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:05.288966  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:07.290168  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:06.707030  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0921 22:24:06.717244  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0921 22:24:06.729638  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "registry.k8s.io/pause:3.8"|' -i /etc/containerd/config.toml"
	I0921 22:24:06.737194  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.744928  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0921 22:24:06.752650  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.mk"|' -i /etc/containerd/config.toml"
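
Taken together, the four sed invocations above rewrite /etc/containerd/config.toml so that the relevant lines read roughly as follows (the surrounding TOML tables come from the image's stock config, so this is a sketch, not the full file):

	sandbox_image = "registry.k8s.io/pause:3.8"
	restrict_oom_score_adj = false
	SystemdCgroup = false
	conf_dir = "/etc/cni/net.mk"

The non-standard CNI conf_dir (/etc/cni/net.mk rather than the usual /etc/cni/net.d) appears deliberate: minikube points containerd at a directory it manages itself, so leftover CNI configs from other runtimes are not picked up.
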
	I0921 22:24:06.760419  283599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0921 22:24:06.766584  283599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0921 22:24:06.772903  283599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0921 22:24:06.844578  283599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0921 22:24:06.917291  283599 start.go:450] Will wait 60s for socket path /run/containerd/containerd.sock
	I0921 22:24:06.917353  283599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0921 22:24:06.921118  283599 start.go:471] Will wait 60s for crictl version
	I0921 22:24:06.921184  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:06.948257  283599 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-09-21T22:24:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
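
The retry is expected: containerd was restarted moments earlier and its CRI plugin answers "server is not initialized yet" until it finishes loading, so minikube backs off and polls again (the call succeeds at 22:24:17 below). The same poll can be expressed as a minimal shell loop (the bounds are illustrative; minikube's retry.go uses its own backoff):

	# illustrative readiness poll for the CRI endpoint
	for i in $(seq 1 30); do
	    sudo crictl version >/dev/null 2>&1 && break
	    sleep 2
	done
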
	I0921 22:24:06.817672  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.317278  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:11.317829  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:09.789185  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:12.289080  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:13.817410  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:15.817496  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:17.995620  283599 ssh_runner.go:195] Run: sudo crictl version
	I0921 22:24:18.018705  283599 start.go:480] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.8
	RuntimeApiVersion:  v1alpha2
	I0921 22:24:18.018768  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.047337  283599 ssh_runner.go:195] Run: containerd --version
	I0921 22:24:18.078051  283599 out.go:177] * Preparing Kubernetes v1.25.2 on containerd 1.6.8 ...
	I0921 22:24:14.289667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:16.789199  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:18.079491  283599 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220921221118-10174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0921 22:24:18.103308  283599 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0921 22:24:18.106553  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
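
The one-liner above is a filter-and-replace idiom for /etc/hosts: drop any existing line ending in the tab-separated name, append the desired mapping, stage the result in a PID-keyed temp file, then copy it into place with sudo (writing via cp keeps the existing inode, which matters where /etc/hosts is a bind mount, as inside a container). As a standalone sketch, with ensure_hosts_entry a hypothetical helper rather than a minikube function:

	# hypothetical helper mirroring the idiom minikube runs over SSH
	# (note: the name is used as a regex here, so dots match loosely)
	ensure_hosts_entry() {
	    local ip="$1" name="$2"
	    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts
	}
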
	I0921 22:24:18.115993  283599 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 22:24:18.116056  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.139896  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.139921  283599 containerd.go:467] Images already preloaded, skipping extraction
	I0921 22:24:18.139964  283599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0921 22:24:18.163323  283599 containerd.go:553] all images are preloaded for containerd runtime.
	I0921 22:24:18.163344  283599 cache_images.go:84] Images are preloaded, skipping loading
	I0921 22:24:18.163382  283599 ssh_runner.go:195] Run: sudo crictl info
	I0921 22:24:18.186911  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:18.186935  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:18.186948  283599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0921 22:24:18.186961  283599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220921221118-10174 NodeName:default-k8s-different-port-20220921221118-10174 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I0921 22:24:18.187074  283599 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220921221118-10174"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
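
The config just rendered stacks four kubeadm API documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. To sanity-check a file like this by hand, one option is a dry run against the same staged binary (--dry-run renders the manifests without modifying the node; the command below is assembled from paths in this log):

	sudo /var/lib/minikube/binaries/v1.25.2/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run
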
	
	I0921 22:24:18.187152  283599 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220921221118-10174 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0921 22:24:18.187196  283599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
	I0921 22:24:18.194012  283599 binaries.go:44] Found k8s binaries, skipping transfer
	I0921 22:24:18.194081  283599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0921 22:24:18.200606  283599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0921 22:24:18.212899  283599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0921 22:24:18.224754  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0921 22:24:18.236775  283599 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0921 22:24:18.239439  283599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0921 22:24:18.248263  283599 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174 for IP: 192.168.85.2
	I0921 22:24:18.248377  283599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
	I0921 22:24:18.248421  283599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
	I0921 22:24:18.248485  283599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/client.key
	I0921 22:24:18.248538  283599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key.43b9df8c
	I0921 22:24:18.248575  283599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key
	I0921 22:24:18.248658  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem (1338 bytes)
	W0921 22:24:18.248689  283599 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174_empty.pem, impossibly tiny 0 bytes
	I0921 22:24:18.248705  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
	I0921 22:24:18.248729  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
	I0921 22:24:18.248758  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
	I0921 22:24:18.248780  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1675 bytes)
	I0921 22:24:18.248846  283599 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem (1708 bytes)
	I0921 22:24:18.249439  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0921 22:24:18.265894  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0921 22:24:18.282128  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0921 22:24:18.298690  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/default-k8s-different-port-20220921221118-10174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0921 22:24:18.315323  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0921 22:24:18.331842  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0921 22:24:18.348196  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0921 22:24:18.364368  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0921 22:24:18.380401  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0921 22:24:18.396696  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/10174.pem --> /usr/share/ca-certificates/10174.pem (1338 bytes)
	I0921 22:24:18.413238  283599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/101742.pem --> /usr/share/ca-certificates/101742.pem (1708 bytes)
	I0921 22:24:18.429482  283599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0921 22:24:18.441654  283599 ssh_runner.go:195] Run: openssl version
	I0921 22:24:18.446184  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0921 22:24:18.453215  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456119  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.456166  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0921 22:24:18.460690  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0921 22:24:18.467196  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10174.pem && ln -fs /usr/share/ca-certificates/10174.pem /etc/ssl/certs/10174.pem"
	I0921 22:24:18.474449  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477401  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:32 /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.477445  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10174.pem
	I0921 22:24:18.481956  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10174.pem /etc/ssl/certs/51391683.0"
	I0921 22:24:18.488418  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101742.pem && ln -fs /usr/share/ca-certificates/101742.pem /etc/ssl/certs/101742.pem"
	I0921 22:24:18.495604  283599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498556  283599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:32 /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.498600  283599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101742.pem
	I0921 22:24:18.503245  283599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101742.pem /etc/ssl/certs/3ec20f2e.0"
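
The last few commands build OpenSSL's hashed CA directory layout: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs lets OpenSSL locate the CA by hash at verification time (b5213941.0, 51391683.0 and 3ec20f2e.0 above are exactly those links for the three certs). Done by hand for one cert, the pattern is:

	# compute the subject hash and create the lookup symlink OpenSSL expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
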
	I0921 22:24:18.509856  283599 kubeadm.go:396] StartCluster: {Name:default-k8s-different-port-20220921221118-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:default-k8s-different-port-20220921221118-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 22:24:18.509953  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0921 22:24:18.509985  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:18.533346  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:18.533375  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:18.533382  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:18.533388  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:18.533393  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:18.533402  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:18.533407  283599 cri.go:87] found id: ""
	I0921 22:24:18.533444  283599 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0921 22:24:18.545553  283599 cri.go:114] JSON = null
	W0921 22:24:18.545605  283599 kubeadm.go:403] unpause failed: list paused: list returned 0 containers, but ps returned 6
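
This warning is a consistency check between two views of the runtime: the CRI listing found six kube-system container ids, while runc, asked for its container list under /run/containerd/runc/k8s.io, returned no JSON at all, so there is nothing paused to resume and minikube proceeds to the restart path. The two views can be compared by hand with the same commands the log shows:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json
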
	I0921 22:24:18.545686  283599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0921 22:24:18.552635  283599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0921 22:24:18.552664  283599 kubeadm.go:627] restartCluster start
	I0921 22:24:18.552705  283599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0921 22:24:18.558944  283599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.559817  283599 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220921221118-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:24:18.560296  283599 kubeconfig.go:127] "default-k8s-different-port-20220921221118-10174" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig - will repair!
	I0921 22:24:18.561146  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:24:18.562655  283599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0921 22:24:18.568841  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.568884  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.576584  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.776932  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.777023  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.786228  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:18.977461  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:18.977542  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:18.986186  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.177398  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.177487  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.186159  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.377453  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.377534  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.385921  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.577206  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.577296  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.586370  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.777572  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.777676  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.786797  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:19.977103  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:19.977188  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:19.985822  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.177132  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.177234  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.185876  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.377187  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.377298  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.386086  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.577399  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.577488  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.586142  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.777447  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.777527  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.786547  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:20.976769  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:20.976865  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:20.985682  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.176870  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.176951  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.185811  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.377116  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.377184  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.385829  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.577109  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.577202  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.585911  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.585933  283599 api_server.go:165] Checking apiserver status ...
	I0921 22:24:21.585979  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0921 22:24:21.593866  283599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.593893  283599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I0921 22:24:21.593899  283599 kubeadm.go:1114] stopping kube-system containers ...
	I0921 22:24:21.593908  283599 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0921 22:24:21.593964  283599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0921 22:24:21.618017  283599 cri.go:87] found id: "1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5"
	I0921 22:24:21.618041  283599 cri.go:87] found id: "e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608"
	I0921 22:24:21.618048  283599 cri.go:87] found id: "2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01"
	I0921 22:24:21.618058  283599 cri.go:87] found id: "1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2"
	I0921 22:24:21.618064  283599 cri.go:87] found id: "9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7"
	I0921 22:24:21.618072  283599 cri.go:87] found id: "8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767"
	I0921 22:24:21.618078  283599 cri.go:87] found id: ""
	I0921 22:24:21.618082  283599 cri.go:232] Stopping containers: [1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767]
	I0921 22:24:21.618118  283599 ssh_runner.go:195] Run: which crictl
	I0921 22:24:21.621347  283599 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1058c41aafbb87ce5fc70eca7e064de03a01eb48e2da027b4515730997b03de5 e1b3d54125fe2aff15e0a996ae0be62597db4065ff9a9d41a3b0e598b97b1608 2654f64b12dee801006bf0c4b82dc6839422d58571d4bc74cee43fe7fdfe9b01 1bb50adc50b7e339b9e7fee763126f6f17a621dcc78637bab78187b50e68bbf2 9e931b83ea689a63f7a3861c1e0f7076f722e68890221c6209b05b3b852c12d7 8a3d458869b0951d2fe3dd93e9d05a549b058f95f84b2ad0685292ebb3428767
	I0921 22:24:21.645531  283599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0921 22:24:21.655622  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:24:21.662408  283599 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 21 22:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 21 22:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Sep 21 22:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 21 22:11 /etc/kubernetes/scheduler.conf
	
	I0921 22:24:21.662459  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0921 22:24:21.669029  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0921 22:24:21.675699  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.682316  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.682358  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0921 22:24:21.688501  283599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0921 22:24:17.817867  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:19.818111  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:18.789856  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.289803  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:21.694928  283599 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0921 22:24:21.696684  283599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0921 22:24:21.703329  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
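
The grep/rm pairs above check whether each kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint (https://control-plane.minikube.internal:8444 for this profile); files that do not match are deleted so the kubeadm kubeconfig phase below regenerates them. A sketch of that pruning logic in plain Go, with the hypothetical helper name pruneStaleKubeconfigs (the real flow shells out to grep over SSH instead of reading the files locally):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pruneStaleKubeconfigs keeps only the /etc/kubernetes/*.conf files that
	// already reference the expected endpoint; the rest are removed so that
	// "kubeadm init phase kubeconfig" recreates them, as in the log above.
	func pruneStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: kubeadm will create it
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				os.Remove(f)
			}
		}
	}

	func main() {
		pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444",
			[]string{
				"/etc/kubernetes/admin.conf",
				"/etc/kubernetes/kubelet.conf",
				"/etc/kubernetes/controller-manager.conf",
				"/etc/kubernetes/scheduler.conf",
			})
	}
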
	I0921 22:24:21.710109  283599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0921 22:24:21.710132  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:21.757457  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.810948  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053458682s)
	I0921 22:24:22.810976  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.943243  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:22.995873  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
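
The five kubeadm init phase invocations above (certs all, kubeconfig all, kubelet-start, control-plane all, etcd local) rebuild the control plane from /var/tmp/minikube/kubeadm.yaml, each run with PATH pointed at the version-pinned binaries directory. A sketch that replays the same sequence; runInitPhases is a hypothetical name, the paths are the ones shown in the log, and error handling is simplified:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runInitPhases replays the phase sequence from the log, using the
	// version-pinned binaries directory on PATH for each invocation.
	func runInitPhases(version, config string) error {
		path := fmt.Sprintf("/var/lib/minikube/binaries/%s:%s", version, os.Getenv("PATH"))
		for _, phase := range []string{"certs all", "kubeconfig all",
			"kubelet-start", "control-plane all", "etcd local"} {
			cmd := exec.Command("/bin/bash", "-c",
				fmt.Sprintf("sudo env PATH=%q kubeadm init phase %s --config %s",
					path, phase, config))
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("phase %q: %v\n%s", phase, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := runInitPhases("v1.25.2", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Println(err)
		}
	}
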
	I0921 22:24:23.097694  283599 api_server.go:51] waiting for apiserver process to appear ...
	I0921 22:24:23.097766  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:23.608210  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.107567  283599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 22:24:24.187217  283599 api_server.go:71] duration metric: took 1.089523123s to wait for apiserver process to appear ...
	I0921 22:24:24.187296  283599 api_server.go:87] waiting for apiserver healthz status ...
	I0921 22:24:24.187323  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:24.187688  283599 api_server.go:256] stopped: https://192.168.85.2:8444/healthz: Get "https://192.168.85.2:8444/healthz": dial tcp 192.168.85.2:8444: connect: connection refused
	I0921 22:24:24.688449  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:22.317667  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:24.317872  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:23.789425  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:25.789684  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.790412  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:27.592182  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0921 22:24:27.592315  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0921 22:24:27.688579  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:27.694601  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:27.694667  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.187832  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.192979  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.193004  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:28.688623  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:28.695172  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0921 22:24:28.695285  283599 api_server.go:102] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0921 22:24:29.187841  283599 api_server.go:240] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0921 22:24:29.193157  283599 api_server.go:266] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0921 22:24:29.198775  283599 api_server.go:140] control plane version: v1.25.2
	I0921 22:24:29.198796  283599 api_server.go:130] duration metric: took 5.011488882s to wait for apiserver health ...
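
The healthz loop above shows the typical progression after a control-plane restart: connection refused while the apiserver binds, 403 for the anonymous probe before the RBAC bootstrap roles exist, 500 while individual poststarthook checks (the [-] lines) finish, and finally 200 ok. Any non-200 answer is simply retried. A sketch of such a poller; waitForHealthz is a hypothetical name, and it skips TLS verification for brevity where the real client trusts the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the timeout expires. 403 and 500 responses, as seen in the log
	// while bootstrap hooks complete, just mean "try again".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Sketch only: the real client verifies against the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy")
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
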
	I0921 22:24:29.198805  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:24:29.198812  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:24:29.201314  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:24:29.202798  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:24:29.206616  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:24:29.206636  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:24:29.221913  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
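
The CNI step above writes the generated kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the version-pinned kubectl against the node-local kubeconfig. The same invocation as a one-off sketch, assuming it runs on the node where minikube staged those paths:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Apply the staged CNI manifest exactly as the log line above does.
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.25.2/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
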
	I0921 22:24:29.826767  283599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0921 22:24:29.834488  283599 system_pods.go:59] 9 kube-system pods found
	I0921 22:24:29.834517  283599 system_pods.go:61] "coredns-565d847f94-mrkjn" [7f364c47-74ce-4271-aab1-67bba320c586] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834528  283599 system_pods.go:61] "etcd-default-k8s-different-port-20220921221118-10174" [8f0f58a7-7eae-43db-840f-bde95464e94e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0921 22:24:29.834533  283599 system_pods.go:61] "kindnet-7wbpp" [3f16ae0b-2f66-4f1e-b234-74570472a7f8] Running
	I0921 22:24:29.834539  283599 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220921221118-10174" [3a935d6b-ca77-4bcb-ae19-0a2af77c12a1] Running
	I0921 22:24:29.834544  283599 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220921221118-10174" [d01ee91a-5587-48e9-a235-68a73d5fedef] Running
	I0921 22:24:29.834549  283599 system_pods.go:61] "kube-proxy-lzphc" [611dbd37-0771-41b2-b886-93f46d79f802] Running
	I0921 22:24:29.834554  283599 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220921221118-10174" [998713da-f133-43f7-9f11-c6110ad66c8d] Running
	I0921 22:24:29.834561  283599 system_pods.go:61] "metrics-server-5c8fd5cf8-sshzh" [5972fae5-09c2-4e2e-b609-ef85f72311e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834572  283599 system_pods.go:61] "storage-provisioner" [ca16dea1-fb3d-4cc1-b449-2236aefcc627] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0921 22:24:29.834577  283599 system_pods.go:74] duration metric: took 7.786123ms to wait for pod list to return data ...
	I0921 22:24:29.834588  283599 node_conditions.go:102] verifying NodePressure condition ...
	I0921 22:24:29.837059  283599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0921 22:24:29.837085  283599 node_conditions.go:123] node cpu capacity is 8
	I0921 22:24:29.837096  283599 node_conditions.go:105] duration metric: took 2.500371ms to run NodePressure ...
	I0921 22:24:29.837121  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0921 22:24:30.025715  283599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029542  283599 kubeadm.go:778] kubelet initialised
	I0921 22:24:30.029565  283599 kubeadm.go:779] duration metric: took 3.826857ms waiting for restarted kubelet to initialise ...
	I0921 22:24:30.029572  283599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:24:30.034316  283599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
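
The pod_ready wait that starts here polls each system-critical pod for a Ready condition with status True; the repeated messages below show coredns stuck Pending because the node still carries the node.kubernetes.io/not-ready taint, which only clears once the freshly applied CNI brings the node to Ready. A client-go sketch of the same readiness check (isPodReady is a hypothetical name, and the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, the
	// same check the pod_ready.go loop in the log keeps retrying.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-565d847f94-mrkjn", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
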
	I0921 22:24:26.817684  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:29.317793  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:31.318001  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:30.289213  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:32.039865  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.040511  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:36.539322  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:33.817371  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:35.817456  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:34.789530  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:37.289284  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:38.539700  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:41.040333  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:37.817967  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:40.318244  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:39.789967  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:42.289726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:43.539636  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:45.540134  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:42.817716  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.818139  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:44.789355  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:47.288847  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:48.040425  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:50.539475  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:47.317825  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.318211  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:49.289182  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:51.289938  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:52.539590  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:54.540310  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:51.817491  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:55.818165  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:53.789719  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:56.289013  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:24:57.040311  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:59.539775  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.318151  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:00.318254  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:24:58.289251  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:00.789124  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.789910  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:02.040207  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.540336  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:02.817283  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:04.817911  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:05.290121  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.789553  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:07.039774  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.039928  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:11.040136  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:07.318317  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:09.817957  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:10.289528  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:12.789110  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:13.540022  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:16.040513  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:12.317490  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.818433  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:14.789413  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:16.789947  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:18.539457  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:21.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:17.317880  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:19.289330  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:21.789335  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:23.539701  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.039677  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:22.317640  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:24.318075  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:23.789488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:25.789726  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:28.539400  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:30.540154  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:26.817737  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.818270  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:31.318310  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:28.289323  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:30.789442  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:32.789667  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:33.039502  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.039801  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:33.318392  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:35.818247  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:34.790488  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.288758  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:37.539221  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.539681  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:41.539999  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:38.317564  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:40.317641  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:39.289052  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:41.789424  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:44.040284  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:46.540320  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:42.818080  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:45.317732  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:44.289331  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:46.789866  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:49.039837  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:51.540123  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:47.817565  276511 pod_ready.go:102] pod "coredns-565d847f94-m8xgt" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:09:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:49.314620  276511 pod_ready.go:81] duration metric: took 4m0.002300536s waiting for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" ...
	E0921 22:25:49.314670  276511 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-m8xgt" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:25:49.314692  276511 pod_ready.go:38] duration metric: took 4m0.007078344s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:25:49.314717  276511 kubeadm.go:631] restartCluster took 4m10.710033944s
	W0921 22:25:49.314858  276511 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:25:49.314887  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:25:49.289362  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:51.789574  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:54.040292  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:56.540637  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:25:52.154431  276511 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.839517184s)
	I0921 22:25:52.154487  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:25:52.163969  276511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:25:52.170969  276511 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:25:52.171027  276511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:25:52.177996  276511 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
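The exit status 2 just above is expected rather than an error: the kubeadm reset that completed at 22:25:52 deleted all four control-plane kubeconfigs, so the ls probe finds nothing, minikube skips stale-config cleanup, and falls through to a fresh kubeadm init. The probe is a simple existence check run over SSH; a minimal local sketch of the same pattern (not minikube's ssh_runner) follows.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hasStaleKubeConfigs reports whether leftover control-plane kubeconfigs
    // exist. ls exits non-zero as soon as any listed path is missing; in the
    // log above all four are gone after the reset, which is what triggers
    // "config check failed, skipping stale config cleanup".
    func hasStaleKubeConfigs() bool {
        paths := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        return exec.Command("ls", append([]string{"-la"}, paths...)...).Run() == nil
    }

    func main() {
        fmt.Println("stale configs present:", hasStaleKubeConfigs())
    }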
	I0921 22:25:52.178063  276511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:25:52.213969  276511 kubeadm.go:317] W0921 22:25:52.213140    3321 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:25:52.246713  276511 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:25:52.310910  276511 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:25:54.288796  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:25:56.289801  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:01.184243  276511 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:26:01.184314  276511 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:26:01.184416  276511 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:26:01.184507  276511 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:26:01.184592  276511 kubeadm.go:317] OS: Linux
	I0921 22:26:01.184673  276511 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:26:01.184737  276511 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:26:01.184793  276511 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:26:01.184856  276511 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:26:01.184921  276511 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:26:01.184985  276511 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:26:01.185046  276511 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:26:01.185099  276511 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:26:01.185157  276511 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:26:01.185254  276511 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:26:01.185380  276511 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:26:01.185526  276511 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0921 22:26:01.185623  276511 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:26:01.187463  276511 out.go:204]   - Generating certificates and keys ...
	I0921 22:26:01.187540  276511 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:26:01.187594  276511 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:26:01.187659  276511 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:26:01.187785  276511 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:26:01.187900  276511 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:26:01.187958  276511 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:26:01.188014  276511 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:26:01.188086  276511 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:26:01.188221  276511 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:26:01.188336  276511 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:26:01.188409  276511 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:26:01.188488  276511 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:26:01.188556  276511 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:26:01.188636  276511 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:26:01.188731  276511 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:26:01.188817  276511 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:26:01.188953  276511 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:26:01.189087  276511 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:26:01.189191  276511 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:26:01.189310  276511 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:26:01.191284  276511 out.go:204]   - Booting up control plane ...
	I0921 22:26:01.191385  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:26:01.191486  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:26:01.191561  276511 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:26:01.191748  276511 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:26:01.191985  276511 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:26:01.192105  276511 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.503275 seconds
	I0921 22:26:01.192289  276511 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:26:01.192460  276511 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:26:01.192545  276511 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:26:01.192839  276511 kubeadm.go:317] [mark-control-plane] Marking the node no-preload-20220921220832-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:26:01.192906  276511 kubeadm.go:317] [bootstrap-token] Using token: 9ldpwz.b05pw96cyce3l1nr
	I0921 22:26:01.194593  276511 out.go:204]   - Configuring RBAC rules ...
	I0921 22:26:01.194724  276511 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:26:01.194852  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:26:01.195058  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:26:01.195234  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:26:01.195387  276511 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:26:01.195500  276511 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:26:01.195644  276511 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:26:01.195703  276511 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:26:01.195765  276511 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:26:01.195777  276511 kubeadm.go:317] 
	I0921 22:26:01.195861  276511 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:26:01.195872  276511 kubeadm.go:317] 
	I0921 22:26:01.195980  276511 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:26:01.196004  276511 kubeadm.go:317] 
	I0921 22:26:01.196036  276511 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:26:01.196117  276511 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:26:01.196194  276511 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:26:01.196207  276511 kubeadm.go:317] 
	I0921 22:26:01.196286  276511 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:26:01.196303  276511 kubeadm.go:317] 
	I0921 22:26:01.196379  276511 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:26:01.196404  276511 kubeadm.go:317] 
	I0921 22:26:01.196485  276511 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:26:01.196595  276511 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:26:01.196694  276511 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:26:01.196706  276511 kubeadm.go:317] 
	I0921 22:26:01.196820  276511 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:26:01.196920  276511 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:26:01.196931  276511 kubeadm.go:317] 
	I0921 22:26:01.197032  276511 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197181  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:26:01.197220  276511 kubeadm.go:317] 	--control-plane 
	I0921 22:26:01.197231  276511 kubeadm.go:317] 
	I0921 22:26:01.197362  276511 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:26:01.197381  276511 kubeadm.go:317] 
	I0921 22:26:01.197495  276511 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token 9ldpwz.b05pw96cyce3l1nr \
	I0921 22:26:01.197628  276511 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
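A note on the join commands printed above: --discovery-token-ca-cert-hash pins the cluster CA. kubeadm computes it as the SHA-256 digest of the CA certificate's DER-encoded SubjectPublicKeyInfo, prefixed with "sha256:", and a joining node refuses any API server whose CA does not hash to this value. The short program below recomputes the hash; the ca.crt path is kubeadm's default location, an assumption rather than something taken from this log.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed default path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("ca.crt is not PEM")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }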
	I0921 22:26:01.197660  276511 cni.go:95] Creating CNI manager for ""
	I0921 22:26:01.197674  276511 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:26:01.199797  276511 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:25:59.039749  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.040507  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:01.201405  276511 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:26:01.205181  276511 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:26:01.205199  276511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:26:01.218971  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0921 22:25:58.789397  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:00.789911  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:03.540344  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:06.039881  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:02.006490  276511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:26:02.006560  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.006575  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=no-preload-20220921220832-10174 minikube.k8s.io/updated_at=2022_09_21T22_26_02_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.013858  276511 ops.go:34] apiserver oom_adj: -16
	I0921 22:26:02.099832  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:02.694112  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.194089  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.693535  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.193854  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:04.693713  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.194101  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:05.694288  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.193619  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:06.693501  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:03.289345  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:05.789183  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:08.040230  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:10.539463  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:07.193590  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:07.693901  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.194072  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.694197  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.193914  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:09.693488  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.194416  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:10.693496  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.194435  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:11.694097  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:08.289258  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:10.789536  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.790035  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:12.194461  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:12.694279  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.193818  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.693711  276511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:26:13.758985  276511 kubeadm.go:1067] duration metric: took 11.752476269s to wait for elevateKubeSystemPrivileges.
	I0921 22:26:13.759013  276511 kubeadm.go:398] StartCluster complete in 4m35.198807914s
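The run of `kubectl get sa default` calls above, spaced roughly 500ms apart from 22:26:02 to 22:26:13, is minikube waiting for the default ServiceAccount: the controller manager creates it asynchronously after init, and the minikube-rbac cluster-admin binding issued at 22:26:02 only takes effect once kube-system:default exists. The pattern is an ordinary poll-until-success loop; in the sketch below the interval and timeout are illustrative assumptions, not minikube's configured values.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls until `kubectl get sa default` succeeds, i.e.
    // until the controller manager has created the default ServiceAccount.
    func waitForDefaultSA(kubectl string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command(kubectl, "get", "sa", "default").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("kubectl", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("default service account is ready")
    }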
	I0921 22:26:13.759030  276511 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:13.759144  276511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:26:13.760661  276511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:26:14.276964  276511 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220921220832-10174" rescaled to 1
	I0921 22:26:14.277021  276511 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:26:14.277060  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:26:14.279846  276511 out.go:177] * Verifying Kubernetes components...
	I0921 22:26:14.277154  276511 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:26:14.277306  276511 config.go:180] Loaded profile config "no-preload-20220921220832-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:26:14.281313  276511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:26:14.281349  276511 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281359  276511 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281373  276511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220921220832-10174"
	I0921 22:26:14.281387  276511 addons.go:65] Setting metrics-server=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281397  276511 addons.go:65] Setting dashboard=true in profile "no-preload-20220921220832-10174"
	I0921 22:26:14.281436  276511 addons.go:153] Setting addon dashboard=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281450  276511 addons.go:162] addon dashboard should already be in state true
	I0921 22:26:14.281497  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281400  276511 addons.go:153] Setting addon metrics-server=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.281576  276511 addons.go:162] addon metrics-server should already be in state true
	I0921 22:26:14.281377  276511 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220921220832-10174"
	I0921 22:26:14.281640  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	W0921 22:26:14.281653  276511 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:26:14.281684  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.281727  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282004  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282138  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.282139  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.321366  276511 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.323218  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:26:14.323243  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:26:14.323321  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.323433  276511 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220921220832-10174"
	W0921 22:26:14.323452  276511 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:26:14.323478  276511 host.go:66] Checking if "no-preload-20220921220832-10174" exists ...
	I0921 22:26:14.323995  276511 cli_runner.go:164] Run: docker container inspect no-preload-20220921220832-10174 --format={{.State.Status}}
	I0921 22:26:14.331074  276511 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:26:14.333243  276511 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:26:14.335670  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:26:14.335699  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0921 22:26:14.335828  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.338700  276511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:26:12.540251  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:15.040305  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:14.339971  276511 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.339996  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:26:14.340067  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.357088  276511 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.357118  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:26:14.357179  276511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220921220832-10174
	I0921 22:26:14.363845  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.373248  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.374001  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.403584  276511 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:26:14.403673  276511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
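The sed pipeline above splices a hosts block into the CoreDNS Corefile immediately before its `forward . /etc/resolv.conf` line, so in-cluster DNS resolves host.minikube.internal to the docker network gateway (192.168.94.1 on this profile); the "host record injected into CoreDNS" line at 22:26:15 confirms the replace landed. Reconstructed from the sed expression, the spliced fragment is:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }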
	I0921 22:26:14.403710  276511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49438 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/no-preload-20220921220832-10174/id_rsa Username:docker}
	I0921 22:26:14.597706  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:26:14.597740  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:26:14.598185  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:26:14.598208  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:26:14.678717  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:26:14.691157  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:26:14.691190  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:26:14.776824  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:26:14.780103  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:26:14.780131  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:26:14.796772  276511 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.796802  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:26:14.877240  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:26:14.877270  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:26:14.886529  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:26:14.982072  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:26:14.982106  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:26:15.083042  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:26:15.083073  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:26:15.185025  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:26:15.185058  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:26:15.288358  276511 start.go:810] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0921 22:26:15.295798  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:26:15.295830  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:26:15.390667  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:26:15.390693  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:26:15.415462  276511 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.415496  276511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:26:15.492343  276511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:26:15.887638  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.208874575s)
	I0921 22:26:15.887703  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.110843194s)
	I0921 22:26:15.982100  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.095511944s)
	I0921 22:26:15.982142  276511 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220921220832-10174"
	I0921 22:26:16.410487  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:16.706261  276511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.213866962s)
	I0921 22:26:16.708800  276511 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0921 22:26:16.709899  276511 addons.go:414] enableAddons completed in 2.432760887s
	I0921 22:26:15.290491  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.789818  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:17.539620  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:20.039549  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:18.911099  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:21.409684  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:20.289226  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292776  265259 node_ready.go:58] node "embed-certs-20220921220439-10174" has status "Ready":"False"
	I0921 22:26:22.292799  265259 node_ready.go:38] duration metric: took 4m0.017444735s waiting for node "embed-certs-20220921220439-10174" to be "Ready" ...
	I0921 22:26:22.294631  265259 out.go:177] 
	W0921 22:26:22.296115  265259 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:26:22.296143  265259 out.go:239] * 
	W0921 22:26:22.296927  265259 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:26:22.298511  265259 out.go:177] 
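Following the advice box above, the logs for the failed profile can be captured for a bug report; a sketch using the profile name from this run:

	# Write the failed profile's logs to logs.txt for attachment to an issue.
	minikube logs --file=logs.txt -p embed-certs-20220921220439-10174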
	I0921 22:26:22.539641  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:25.039622  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:23.410505  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:25.909606  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:27.539385  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:29.539878  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:31.540249  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:27.910578  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:30.410429  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:33.540339  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:35.541025  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:32.910296  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:34.911081  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:38.039663  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:40.539522  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:37.410360  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:39.410436  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:42.540000  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:45.040231  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:41.909862  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:43.910310  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:46.409644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:47.540283  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:50.039510  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:48.410566  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:50.410732  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:52.039949  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:54.540144  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:52.910395  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:54.910495  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:57.039966  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:59.040209  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.539473  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:26:57.409907  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:26:59.410288  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:03.540044  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:06.040183  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:01.910153  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:04.409817  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:06.410562  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:08.040423  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:10.539873  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:08.910302  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:11.410571  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:13.039961  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:15.040246  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:13.909964  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:15.910369  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:17.539604  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:19.539765  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:18.410585  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:20.910125  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:22.040021  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:24.539835  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:26.540240  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:22.910441  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:25.410069  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:28.540555  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:31.039426  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:27.410438  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:29.410512  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:33.040327  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:35.040601  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:31.910290  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:34.409802  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:37.540256  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:40.039584  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:36.909982  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:39.409679  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:41.410245  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:42.539492  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:44.539613  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:46.540433  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:43.909863  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:45.910696  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:49.039750  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:51.040314  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:48.410147  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:50.410237  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:53.040407  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:55.540422  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:52.910535  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:55.410601  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:58.040486  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:00.540148  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:27:57.910322  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:27:59.910846  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:03.039402  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:05.040045  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:02.410370  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:04.410513  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:07.040112  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:09.539484  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:11.539916  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:06.910328  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:09.409926  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:11.410618  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:14.040357  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:16.040410  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:13.909830  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:15.910746  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:18.539390  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:20.539944  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:18.409773  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:20.410208  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:22.540064  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:25.039880  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:22.410702  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:24.909931  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:27.539325  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:29.540282  283599 pod_ready.go:102] pod "coredns-565d847f94-mrkjn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-21 22:11:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0921 22:28:30.037464  283599 pod_ready.go:81] duration metric: took 4m0.003103432s waiting for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" ...
	E0921 22:28:30.037491  283599 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-565d847f94-mrkjn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0921 22:28:30.037512  283599 pod_ready.go:38] duration metric: took 4m0.007931264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0921 22:28:30.037542  283599 kubeadm.go:631] restartCluster took 4m11.484871611s
	W0921 22:28:30.037694  283599 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0921 22:28:30.037731  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0921 22:28:26.910183  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:28.910722  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:31.410255  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:32.836415  283599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.798662315s)
	I0921 22:28:32.836470  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:32.846281  283599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0921 22:28:32.853286  283599 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0921 22:28:32.853347  283599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0921 22:28:32.860321  283599 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
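The exit status 2 above is expected at this point: the preceding kubeadm reset removed the kubeconfig files under /etc/kubernetes, so the stale-config check finds nothing to clean up. The same check can be reproduced by hand, assuming SSH access to the node:

	# Exits non-zero when the control-plane kubeconfigs are absent, which is
	# exactly what minikube's stale-config check keys off before kubeadm init.
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	    /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf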
	I0921 22:28:32.860372  283599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0921 22:28:32.899444  283599 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
	I0921 22:28:32.899530  283599 kubeadm.go:317] [preflight] Running pre-flight checks
	I0921 22:28:32.927597  283599 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
	I0921 22:28:32.927684  283599 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1017-gcp
	I0921 22:28:32.927762  283599 kubeadm.go:317] OS: Linux
	I0921 22:28:32.927817  283599 kubeadm.go:317] CGROUPS_CPU: enabled
	I0921 22:28:32.927857  283599 kubeadm.go:317] CGROUPS_CPUACCT: enabled
	I0921 22:28:32.927895  283599 kubeadm.go:317] CGROUPS_CPUSET: enabled
	I0921 22:28:32.927957  283599 kubeadm.go:317] CGROUPS_DEVICES: enabled
	I0921 22:28:32.928004  283599 kubeadm.go:317] CGROUPS_FREEZER: enabled
	I0921 22:28:32.928045  283599 kubeadm.go:317] CGROUPS_MEMORY: enabled
	I0921 22:28:32.928083  283599 kubeadm.go:317] CGROUPS_PIDS: enabled
	I0921 22:28:32.928121  283599 kubeadm.go:317] CGROUPS_HUGETLB: enabled
	I0921 22:28:32.928158  283599 kubeadm.go:317] CGROUPS_BLKIO: enabled
	I0921 22:28:32.994267  283599 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0921 22:28:32.994393  283599 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0921 22:28:32.994471  283599 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0921 22:28:33.113433  283599 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0921 22:28:33.118993  283599 out.go:204]   - Generating certificates and keys ...
	I0921 22:28:33.119145  283599 kubeadm.go:317] [certs] Using existing ca certificate authority
	I0921 22:28:33.119247  283599 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I0921 22:28:33.119310  283599 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0921 22:28:33.119362  283599 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I0921 22:28:33.119432  283599 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I0921 22:28:33.119501  283599 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I0921 22:28:33.119554  283599 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I0921 22:28:33.119605  283599 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I0921 22:28:33.119666  283599 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0921 22:28:33.119759  283599 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0921 22:28:33.119797  283599 kubeadm.go:317] [certs] Using the existing "sa" key
	I0921 22:28:33.119873  283599 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0921 22:28:33.240892  283599 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0921 22:28:33.319256  283599 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0921 22:28:33.514290  283599 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0921 22:28:33.579294  283599 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0921 22:28:33.591185  283599 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0921 22:28:33.591951  283599 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0921 22:28:33.592077  283599 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I0921 22:28:33.671909  283599 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0921 22:28:33.674209  283599 out.go:204]   - Booting up control plane ...
	I0921 22:28:33.674356  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0921 22:28:33.674478  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0921 22:28:33.675328  283599 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0921 22:28:33.677339  283599 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0921 22:28:33.679453  283599 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0921 22:28:33.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:35.410708  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.182528  283599 kubeadm.go:317] [apiclient] All control plane components are healthy after 6.502979 seconds
	I0921 22:28:40.182719  283599 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0921 22:28:40.191775  283599 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0921 22:28:40.708308  283599 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
	I0921 22:28:40.708506  283599 kubeadm.go:317] [mark-control-plane] Marking the node default-k8s-different-port-20220921221118-10174 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0921 22:28:41.216221  283599 kubeadm.go:317] [bootstrap-token] Using token: 7zktge.i7kw817sdpmpqput
	I0921 22:28:41.217917  283599 out.go:204]   - Configuring RBAC rules ...
	I0921 22:28:41.218062  283599 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0921 22:28:41.221048  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0921 22:28:41.225663  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0921 22:28:41.227873  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0921 22:28:41.229840  283599 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0921 22:28:41.231693  283599 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0921 22:28:41.238509  283599 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0921 22:28:41.448788  283599 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
	I0921 22:28:41.684596  283599 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
	I0921 22:28:41.686021  283599 kubeadm.go:317] 
	I0921 22:28:41.686112  283599 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
	I0921 22:28:41.686121  283599 kubeadm.go:317] 
	I0921 22:28:41.686213  283599 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
	I0921 22:28:41.686221  283599 kubeadm.go:317] 
	I0921 22:28:41.686253  283599 kubeadm.go:317]   mkdir -p $HOME/.kube
	I0921 22:28:41.687200  283599 kubeadm.go:317]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0921 22:28:41.687275  283599 kubeadm.go:317]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0921 22:28:41.687282  283599 kubeadm.go:317] 
	I0921 22:28:41.687347  283599 kubeadm.go:317] Alternatively, if you are the root user, you can run:
	I0921 22:28:41.687361  283599 kubeadm.go:317] 
	I0921 22:28:41.687420  283599 kubeadm.go:317]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0921 22:28:41.687443  283599 kubeadm.go:317] 
	I0921 22:28:41.687516  283599 kubeadm.go:317] You should now deploy a pod network to the cluster.
	I0921 22:28:41.687626  283599 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0921 22:28:41.687754  283599 kubeadm.go:317]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0921 22:28:41.687768  283599 kubeadm.go:317] 
	I0921 22:28:41.687856  283599 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
	I0921 22:28:41.687945  283599 kubeadm.go:317] and service account keys on each node and then running the following as root:
	I0921 22:28:41.687952  283599 kubeadm.go:317] 
	I0921 22:28:41.688054  283599 kubeadm.go:317]   kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688176  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 \
	I0921 22:28:41.688202  283599 kubeadm.go:317] 	--control-plane 
	I0921 22:28:41.688207  283599 kubeadm.go:317] 
	I0921 22:28:41.688304  283599 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
	I0921 22:28:41.688309  283599 kubeadm.go:317] 
	I0921 22:28:41.688403  283599 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8444 --token 7zktge.i7kw817sdpmpqput \
	I0921 22:28:41.688525  283599 kubeadm.go:317] 	--discovery-token-ca-cert-hash sha256:b419e756666ce2e30bdac31039f21ad7242d26e2b9c03a55c5bc6fe8dcfddcf7 
	I0921 22:28:41.691473  283599 kubeadm.go:317] W0921 22:28:32.894416    3309 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
	I0921 22:28:41.691806  283599 kubeadm.go:317] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1017-gcp\n", err: exit status 1
	I0921 22:28:41.691944  283599 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0921 22:28:41.691973  283599 cni.go:95] Creating CNI manager for ""
	I0921 22:28:41.691983  283599 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 22:28:41.694185  283599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0921 22:28:37.910661  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:40.410644  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:41.695783  283599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0921 22:28:41.699760  283599 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.2/kubectl ...
	I0921 22:28:41.699784  283599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0921 22:28:41.776183  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
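The line above shells out (over the harness's SSH session) to apply the generated kindnet manifest with the kubectl binary minikube ships for the target Kubernetes version. A minimal local sketch of that step, assuming kubectl and both paths exist on the node — an illustration of the pattern, not minikube's own code:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Apply the CNI manifest with the per-version kubectl, as in the log above.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.25.2/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        if err != nil {
            log.Fatalf("apply CNI manifest: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }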
	I0921 22:28:42.446104  283599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0921 22:28:42.446180  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.446216  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174 minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.524814  283599 ops.go:34] apiserver oom_adj: -16
	I0921 22:28:42.524918  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.099884  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:43.600017  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.099303  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:44.599933  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.100173  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:45.599961  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.099843  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:46.599840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:42.910093  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:44.910463  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:47.099465  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.599512  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.099998  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:48.599598  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.099840  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:49.599433  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.099931  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:50.599355  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.099363  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:51.599865  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:47.410019  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:49.410428  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:51.410461  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:52.099400  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:52.600056  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.100255  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.599772  283599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0921 22:28:53.668975  283599 kubeadm.go:1067] duration metric: took 11.222848116s to wait for elevateKubeSystemPrivileges.
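The run of `get sa default` commands above is a readiness gate: minikube retries roughly every 500ms until the cluster's default service account exists, which is what the elevateKubeSystemPrivileges step (timed on the line above at 11.2s) was waiting on. A minimal sketch of that wait loop, assuming the same binary and kubeconfig paths and a hypothetical two-minute budget:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // hypothetical budget
        for time.Now().Before(deadline) {
            err := exec.Command("sudo",
                "/var/lib/minikube/binaries/v1.25.2/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                log.Println("default service account exists; RBAC grants can proceed")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default service account")
    }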
	I0921 22:28:53.669016  283599 kubeadm.go:398] StartCluster complete in 4m35.159165946s
	I0921 22:28:53.669039  283599 settings.go:142] acquiring lock: {Name:mk2e017b0c75e33bad2b3a546d00214b38a21694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:53.669157  283599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 22:28:53.670820  283599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mkdec5faf1bb182b50a3cd5458b352f38075dc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 22:28:54.187769  283599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220921221118-10174" rescaled to 1
	I0921 22:28:54.187839  283599 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0921 22:28:54.187870  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0921 22:28:54.187894  283599 addons.go:412] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0921 22:28:54.190631  283599 out.go:177] * Verifying Kubernetes components...
	I0921 22:28:54.187957  283599 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187964  283599 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.187970  283599 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188002  283599 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.188076  283599 config.go:180] Loaded profile config "default-k8s-different-port-20220921221118-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 22:28:54.192035  283599 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192079  283599 addons.go:162] addon storage-provisioner should already be in state true
	I0921 22:28:54.192091  283599 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192114  283599 addons.go:162] addon dashboard should already be in state true
	I0921 22:28:54.192162  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192210  283599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 22:28:54.192299  283599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.192580  283599 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	W0921 22:28:54.192616  283599 addons.go:162] addon metrics-server should already be in state true
	I0921 22:28:54.192633  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192666  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.192163  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.192666  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.193362  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.193439  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.234974  283599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0921 22:28:54.236667  283599 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.236692  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0921 22:28:54.236745  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.240000  283599 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.6.0
	I0921 22:28:54.239390  283599 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:54.241874  283599 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.244335  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0921 22:28:54.244363  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	W0921 22:28:54.241874  283599 addons.go:162] addon default-storageclass should already be in state true
	I0921 22:28:54.244424  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.244454  283599 host.go:66] Checking if "default-k8s-different-port-20220921221118-10174" exists ...
	I0921 22:28:54.244956  283599 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220921221118-10174 --format={{.State.Status}}
	I0921 22:28:54.246658  283599 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0921 22:28:54.248082  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0921 22:28:54.248109  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0921 22:28:54.248165  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.272909  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.273873  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.277163  283599 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.277186  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0921 22:28:54.277236  283599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220921221118-10174
	I0921 22:28:54.290041  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.318706  283599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49443 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/default-k8s-different-port-20220921221118-10174/id_rsa Username:docker}
	I0921 22:28:54.398932  283599 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:28:54.399014  283599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0921 22:28:54.496523  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0921 22:28:54.498431  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0921 22:28:54.499591  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0921 22:28:54.499650  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0921 22:28:54.501640  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0921 22:28:54.501663  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0921 22:28:54.594519  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0921 22:28:54.594561  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0921 22:28:54.596768  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0921 22:28:54.596847  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0921 22:28:54.690036  283599 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.690071  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0921 22:28:54.700119  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0921 22:28:54.700197  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0921 22:28:54.876320  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0921 22:28:54.883544  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0921 22:28:54.883571  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I0921 22:28:54.977006  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0921 22:28:54.977040  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0921 22:28:55.079240  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0921 22:28:55.079273  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0921 22:28:55.176309  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0921 22:28:55.176344  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0921 22:28:55.276282  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0921 22:28:55.276317  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0921 22:28:55.379016  283599 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.379044  283599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0921 22:28:55.386242  283599 start.go:810] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS
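The sed pipeline at 22:28:54.399 rewrites the coredns ConfigMap in place; the stanza it inserts ahead of the Corefile's forward directive (reconstructed from that command) is:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

so that host.minikube.internal resolves to the host-side gateway address, with every other name falling through to the existing forwarder — which is exactly the host record injection confirmed on the line above.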
	I0921 22:28:55.399129  283599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0921 22:28:55.595061  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098450009s)
	I0921 22:28:55.786581  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288109437s)
	I0921 22:28:56.081753  283599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.205376891s)
	I0921 22:28:56.081804  283599 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220921221118-10174"
	I0921 22:28:56.387178  283599 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0921 22:28:56.388690  283599 addons.go:414] enableAddons completed in 2.200797183s
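With the addons applied, the start code turns to the gate that ultimately fails below: node_ready.go polls the node object until its Ready condition reports "True". A minimal client-go sketch of the condition being tested — the kubeconfig path and the use of client-go here are illustrative assumptions, not minikube's exact code:

    package main

    import (
        "context"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "default-k8s-different-port-20220921221118-10174", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // The polling below repeats until this reports Status=True.
                log.Printf("Ready condition: Status=%s Reason=%s", c.Status, c.Reason)
            }
        }
    }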
	I0921 22:28:56.404853  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:53.909716  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:55.910611  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:28:58.405031  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:00.405582  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:28:58.409630  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:00.410447  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:02.905572  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:05.405473  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:02.910338  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:05.410066  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:07.904364  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:09.905589  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:07.910279  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:10.410127  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:12.405034  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:14.905741  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:12.910452  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:15.410553  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:17.404952  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:19.405175  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:21.405392  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:17.910479  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:20.410559  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:23.405620  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:25.905592  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:22.909898  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:24.910567  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:27.905775  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:30.405483  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:27.410039  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:29.410131  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:31.410291  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:32.904863  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:35.404709  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:33.910459  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:36.410445  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:37.905690  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:40.405229  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:38.910532  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:41.409671  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:42.905360  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:44.905907  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:43.410422  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:45.910511  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:47.404631  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:49.405402  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:48.409951  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:50.410363  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:51.904997  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:53.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:56.405228  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:52.411261  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:54.910318  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:58.405705  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:00.905348  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:29:57.409683  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:29:59.410335  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.404779  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:05.404833  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:01.909994  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:03.910230  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:06.410036  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:07.405804  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:09.904912  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:08.909550  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:10.910475  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:13.409889  276511 node_ready.go:58] node "no-preload-20220921220832-10174" has status "Ready":"False"
	I0921 22:30:14.413229  276511 node_ready.go:38] duration metric: took 4m0.009606009s waiting for node "no-preload-20220921220832-10174" to be "Ready" ...
	I0921 22:30:14.416209  276511 out.go:177] 
	W0921 22:30:14.417896  276511 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:30:14.417916  276511 out.go:239] * 
	W0921 22:30:14.418711  276511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:30:14.420798  276511 out.go:177] 
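This is the first of the two interleaved starts to give up: the no-preload profile's Ready poll recorded a full 4m0s without the node ever leaving "Ready":"False", so minikube aborts with GUEST_START. The default-k8s-different-port profile continues the same polling below and fails the same way; the diagnostic dump that follows its failure shows why.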
	I0921 22:30:11.905117  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:13.905422  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:15.906020  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:18.404644  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:20.404682  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:22.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:24.905233  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:27.404679  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:29.904692  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:31.905266  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:34.405088  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:36.405476  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:38.904414  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:40.905386  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:43.404507  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:45.405356  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:47.904571  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:50.405311  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:52.904564  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:54.905119  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:57.405076  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:30:59.405121  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:01.904816  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:03.905408  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:05.905565  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:08.404718  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:10.405173  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:12.905041  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:14.905498  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:17.405656  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:19.905667  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:22.405514  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:24.904738  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:27.404689  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:29.405353  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:31.904926  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:34.405471  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:36.905606  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:39.404550  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:41.405513  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:43.905655  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:46.405308  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:48.405699  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:50.905270  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:53.405205  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:55.405540  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:31:57.905798  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:00.405370  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:02.405480  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:04.904649  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:06.905338  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:09.404845  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:11.405472  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:13.905469  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:16.405211  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:18.405365  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:20.904698  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:23.405458  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:25.905299  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:27.905466  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:29.905633  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:32.404583  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:34.404795  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:36.405323  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:38.405395  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:40.904581  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:42.905533  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:45.405100  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:47.405337  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:49.405417  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:51.905042  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.404654  283599 node_ready.go:58] node "default-k8s-different-port-20220921221118-10174" has status "Ready":"False"
	I0921 22:32:54.406831  283599 node_ready.go:38] duration metric: took 4m0.00786279s waiting for node "default-k8s-different-port-20220921221118-10174" to be "Ready" ...
	I0921 22:32:54.409456  283599 out.go:177] 
	W0921 22:32:54.411031  283599 out.go:239] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0921 22:32:54.411055  283599 out.go:239] * 
	W0921 22:32:54.411890  283599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0921 22:32:54.413449  283599 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	4540ded2a6689       d921cee849482       About a minute ago   Running             kindnet-cni               4                   822a3d4f3d26d
	aa697a6cee524       d921cee849482       4 minutes ago        Exited              kindnet-cni               3                   822a3d4f3d26d
	118685bf1243c       1c7d8c51823b5       13 minutes ago       Running             kube-proxy                0                   aab630e852c3d
	943813747ec76       ca0ea1ee3cfd3       13 minutes ago       Running             kube-scheduler            2                   6d3cefcf67297
	be18d7989d5cc       dbfceb93c69b6       13 minutes ago       Running             kube-controller-manager   2                   68cd08f28ec26
	b70eedeefc82f       a8a176a5d5d69       13 minutes ago       Running             etcd                      2                   0f5414f375eea
	a2c10538d6c16       97801f8394908       13 minutes ago       Running             kube-apiserver            2                   741ea276ae553
	
	* 
	* ==> containerd <==
	* -- Logs begin at Wed 2022-09-21 22:24:02 UTC, end at Wed 2022-09-21 22:41:57 UTC. --
	Sep 21 22:34:17 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:34:17.489076528Z" level=info msg="RemoveContainer for \"3b59972efe126124836d07dd686baceb64dbbf348cc964a10b75be0d06e64c90\" returns successfully"
	Sep 21 22:34:28 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:34:28.703514832Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 21 22:34:28 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:34:28.716201001Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"50eae4cda3be9a752c597a633a1b13a0678af7e941d8153d377a632cbbb30ad6\""
	Sep 21 22:34:28 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:34:28.716706179Z" level=info msg="StartContainer for \"50eae4cda3be9a752c597a633a1b13a0678af7e941d8153d377a632cbbb30ad6\""
	Sep 21 22:34:28 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:34:28.792494350Z" level=info msg="StartContainer for \"50eae4cda3be9a752c597a633a1b13a0678af7e941d8153d377a632cbbb30ad6\" returns successfully"
	Sep 21 22:37:09 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:09.218409245Z" level=info msg="shim disconnected" id=50eae4cda3be9a752c597a633a1b13a0678af7e941d8153d377a632cbbb30ad6
	Sep 21 22:37:09 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:09.218482726Z" level=warning msg="cleaning up after shim disconnected" id=50eae4cda3be9a752c597a633a1b13a0678af7e941d8153d377a632cbbb30ad6 namespace=k8s.io
	Sep 21 22:37:09 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:09.218496646Z" level=info msg="cleaning up dead shim"
	Sep 21 22:37:09 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:09.230044349Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:37:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5237 runtime=io.containerd.runc.v2\n"
	Sep 21 22:37:09 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:09.796902239Z" level=info msg="RemoveContainer for \"667a6666662eb715245e6c49d408f391a61521ef6565e10b53d38b8ac51997e6\""
	Sep 21 22:37:09 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:09.802313495Z" level=info msg="RemoveContainer for \"667a6666662eb715245e6c49d408f391a61521ef6565e10b53d38b8ac51997e6\" returns successfully"
	Sep 21 22:37:32 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:32.702587544Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Sep 21 22:37:32 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:32.715206363Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d\""
	Sep 21 22:37:32 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:32.715783208Z" level=info msg="StartContainer for \"aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d\""
	Sep 21 22:37:32 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:37:32.793539554Z" level=info msg="StartContainer for \"aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d\" returns successfully"
	Sep 21 22:40:13 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:13.221056033Z" level=info msg="shim disconnected" id=aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d
	Sep 21 22:40:13 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:13.221118384Z" level=warning msg="cleaning up after shim disconnected" id=aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d namespace=k8s.io
	Sep 21 22:40:13 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:13.221141271Z" level=info msg="cleaning up dead shim"
	Sep 21 22:40:13 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:13.230929874Z" level=warning msg="cleanup warnings time=\"2022-09-21T22:40:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5355 runtime=io.containerd.runc.v2\n"
	Sep 21 22:40:14 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:14.125029534Z" level=info msg="RemoveContainer for \"50eae4cda3be9a752c597a633a1b13a0678af7e941d8153d377a632cbbb30ad6\""
	Sep 21 22:40:14 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:14.130311254Z" level=info msg="RemoveContainer for \"50eae4cda3be9a752c597a633a1b13a0678af7e941d8153d377a632cbbb30ad6\" returns successfully"
	Sep 21 22:40:57 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:57.703957540Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Sep 21 22:40:57 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:57.716355736Z" level=info msg="CreateContainer within sandbox \"822a3d4f3d26d63ab953201b14a329bbcb111b9188a5ef241a4a7c362712ff08\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"4540ded2a66898594632f5cf04ade62805d05e5b696e83cb94cdb1229a784101\""
	Sep 21 22:40:57 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:57.716802456Z" level=info msg="StartContainer for \"4540ded2a66898594632f5cf04ade62805d05e5b696e83cb94cdb1229a784101\""
	Sep 21 22:40:57 default-k8s-different-port-20220921221118-10174 containerd[386]: time="2022-09-21T22:40:57.793164991Z" level=info msg="StartContainer for \"4540ded2a66898594632f5cf04ade62805d05e5b696e83cb94cdb1229a784101\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220921221118-10174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220921221118-10174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
	                    minikube.k8s.io/name=default-k8s-different-port-20220921221118-10174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_09_21T22_28_42_0700
	                    minikube.k8s.io/version=v1.27.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 21 Sep 2022 22:28:38 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220921221118-10174
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 21 Sep 2022 22:41:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 21 Sep 2022 22:39:04 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 21 Sep 2022 22:39:04 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 21 Sep 2022 22:39:04 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 21 Sep 2022 22:39:04 +0000   Wed, 21 Sep 2022 22:28:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-different-port-20220921221118-10174
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871748Ki
	  pods:               110
	System Info:
	  Machine ID:                 40026c506a9948afaae3b0fda0a61c83
	  System UUID:                15db467d-fd65-4163-8719-8617da0ee9c6
	  Boot ID:                    878f3e16-9143-42f3-b848-1a740c7f26bf
	  Kernel Version:             5.15.0-1017-gcp
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.25.2
	  Kube-Proxy Version:         v1.25.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220921221118-10174                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-ngxwf                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220921221118-10174             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220921221118-10174    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-bd9q4                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220921221118-10174             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-different-port-20220921221118-10174 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-different-port-20220921221118-10174 event: Registered Node default-k8s-different-port-20220921221118-10174 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.959863] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.003881] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023897] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:10] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.005087] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[Sep21 22:11] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.967845] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.031851] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.027935] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +2.943864] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.019893] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	[  +1.023889] IPv4: martian source 10.244.0.40 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c2 90 75 48 df f1 08 06
	
	* 
	* ==> etcd [b70eedeefc82fa3c6b066f602863caf1d1480e05a1a53e90d5e069ccbf264998] <==
	* {"level":"info","ts":"2022-09-21T22:28:35.204Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-09-21T22:28:35.204Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-09-21T22:28:35.204Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-09-21T22:28:35.205Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-09-21T22:28:35.205Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2022-09-21T22:28:35.790Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-different-port-20220921221118-10174 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-09-21T22:28:35.792Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-09-21T22:28:35.793Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-09-21T22:28:35.794Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2022-09-21T22:38:35.807Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":532}
	{"level":"info","ts":"2022-09-21T22:38:35.808Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":532,"took":"444.164µs"}
	
	* 
	* ==> kernel <==
	*  22:41:57 up  1:24,  0 users,  load average: 0.28, 0.25, 0.71
	Linux default-k8s-different-port-20220921221118-10174 5.15.0-1017-gcp #23~20.04.2-Ubuntu SMP Wed Aug 17 02:46:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [a2c10538d6c169ae5a43916a76b1c906600bc179a97452948028596b5d7b1e81] <==
	* W0921 22:36:39.272017       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:36:39.272102       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:36:39.272122       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:38:39.275687       1 handler_proxy.go:105] no RequestInfo found in the context
	W0921 22:38:39.275709       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:38:39.275747       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:38:39.275764       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0921 22:38:39.275817       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:38:39.276983       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:39:39.276511       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:39:39.276560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:39:39.276571       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:39:39.277579       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:39:39.277664       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:39:39.277691       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:41:39.276747       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:41:39.276791       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0921 22:41:39.276801       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0921 22:41:39.277893       1 handler_proxy.go:105] no RequestInfo found in the context
	E0921 22:41:39.277970       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0921 22:41:39.277989       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [be18d7989d5ccfb21d219bd0e3566a2f3bc927f64d4564f41e26660aced4961e] <==
	* W0921 22:35:53.890588       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:36:23.509988       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:36:23.901687       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:36:53.516667       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:36:53.913780       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:37:23.523396       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:37:23.923237       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:37:53.529363       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:37:53.933940       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:38:23.535420       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:38:23.944879       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:38:53.542246       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:38:53.954540       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:39:23.548531       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:39:23.964569       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:39:53.555001       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:39:53.975383       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:40:23.562637       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:40:23.986817       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:40:53.569192       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:40:53.997749       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:41:23.574257       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:41:24.007595       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0921 22:41:53.580654       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0921 22:41:54.018057       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [118685bf1243c6ac8c16eec8c30f295521220e8fcf17757f6f81f9e1c5272837] <==
	* I0921 22:28:55.192383       1 node.go:163] Successfully retrieved node IP: 192.168.85.2
	I0921 22:28:55.192477       1 server_others.go:138] "Detected node IP" address="192.168.85.2"
	I0921 22:28:55.192532       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0921 22:28:55.390489       1 server_others.go:206] "Using iptables Proxier"
	I0921 22:28:55.390549       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0921 22:28:55.390563       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0921 22:28:55.390591       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0921 22:28:55.390618       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:28:55.390806       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0921 22:28:55.391060       1 server.go:661] "Version info" version="v1.25.2"
	I0921 22:28:55.391085       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0921 22:28:55.396059       1 config.go:444] "Starting node config controller"
	I0921 22:28:55.396110       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0921 22:28:55.396697       1 config.go:317] "Starting service config controller"
	I0921 22:28:55.396740       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0921 22:28:55.396775       1 config.go:226] "Starting endpoint slice config controller"
	I0921 22:28:55.396786       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0921 22:28:55.496508       1 shared_informer.go:262] Caches are synced for node config
	I0921 22:28:55.497685       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0921 22:28:55.497701       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [943813747ec7626d34543895ff4fd92fa5c805c9e2a573f3149ec44c228ea93f] <==
	* E0921 22:28:38.297790       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0921 22:28:38.297191       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0921 22:28:38.297821       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0921 22:28:38.297099       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:38.297851       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:38.297406       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:28:38.298122       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0921 22:28:38.298151       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0921 22:28:38.298521       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:28:38.298547       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:28:38.298869       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:28:38.298961       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:28:38.299251       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:38.299320       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0921 22:28:39.116527       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0921 22:28:39.116579       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0921 22:28:39.241049       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0921 22:28:39.241093       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0921 22:28:39.376827       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0921 22:28:39.376894       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0921 22:28:39.408191       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0921 22:28:39.408231       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0921 22:28:39.678024       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0921 22:28:39.678066       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0921 22:28:41.495874       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-09-21 22:24:02 UTC, end at Wed 2022-09-21 22:41:58 UTC. --
	Sep 21 22:40:22 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:22.074304    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:27 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:27.075464    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:28 default-k8s-different-port-20220921221118-10174 kubelet[3855]: I0921 22:40:28.700002    3855 scope.go:115] "RemoveContainer" containerID="aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d"
	Sep 21 22:40:28 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:28.700397    3855 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ngxwf_kube-system(48e9bf2d-5096-4913-b521-cbc3b0acc973)\"" pod="kube-system/kindnet-ngxwf" podUID=48e9bf2d-5096-4913-b521-cbc3b0acc973
	Sep 21 22:40:32 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:32.077143    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:37 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:37.078752    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:42 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:42.080326    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:43 default-k8s-different-port-20220921221118-10174 kubelet[3855]: I0921 22:40:43.700564    3855 scope.go:115] "RemoveContainer" containerID="aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d"
	Sep 21 22:40:43 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:43.700871    3855 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-ngxwf_kube-system(48e9bf2d-5096-4913-b521-cbc3b0acc973)\"" pod="kube-system/kindnet-ngxwf" podUID=48e9bf2d-5096-4913-b521-cbc3b0acc973
	Sep 21 22:40:47 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:47.081540    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:52 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:52.083346    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:57 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:40:57.085080    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:40:57 default-k8s-different-port-20220921221118-10174 kubelet[3855]: I0921 22:40:57.701169    3855 scope.go:115] "RemoveContainer" containerID="aa697a6cee524e3186e24ede26b64fbed53528b35dc7a640c229dd06c61fe37d"
	Sep 21 22:41:02 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:02.086090    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:07 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:07.087652    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:12 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:12.088444    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:17 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:17.090203    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:22 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:22.090946    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:27 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:27.092386    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:32 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:32.093607    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:37 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:37.095414    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:42 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:42.096363    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:47 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:47.097291    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:52 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:52.098879    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Sep 21 22:41:57 default-k8s-different-port-20220921221118-10174 kubelet[3855]: E0921 22:41:57.100388    3855 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh: exit status 1 (59.483202ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-565d847f94-mrw5b" not found
	Error from server (NotFound): pods "metrics-server-5c8fd5cf8-5bk5h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-7b94984548-x6wkq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-54596f475f-z5fzh" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220921221118-10174 describe pod coredns-565d847f94-mrw5b metrics-server-5c8fd5cf8-5bk5h storage-provisioner dashboard-metrics-scraper-7b94984548-x6wkq kubernetes-dashboard-54596f475f-z5fzh: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.25s)
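
The failure signal running through the captured logs above is consistent: the node never leaves NotReady because the kubelet keeps reporting "cni plugin not initialized" while the kindnet-cni container sits in CrashLoopBackOff, so coredns, metrics-server, storage-provisioner, and the dashboard pods can never be scheduled. A minimal manual triage of this profile (a sketch only — the kindnet pod label and the CNI config paths are assumptions based on minikube's kindnet setup, not taken from this log) would be:

	# Confirm the node is blocked on the CNI condition
	kubectl --context default-k8s-different-port-20220921221118-10174 describe node | grep NetworkReady
	# See why the CNI pod is crash-looping (label assumed from the kindnet DaemonSet)
	kubectl --context default-k8s-different-port-20220921221118-10174 -n kube-system logs -l app=kindnet --tail=50
	# Check whether any CNI config was ever written on the node
	minikube ssh -p default-k8s-different-port-20220921221118-10174 -- sudo ls -l /etc/cni/net.d /etc/cni/net.mk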

                                                
                                    

Test pass (227/266)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 15.47
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.25.2/json-events 7.18
11 TestDownloadOnly/v1.25.2/preload-exists 0
15 TestDownloadOnly/v1.25.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.25
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
18 TestDownloadOnlyKic 4.01
19 TestBinaryMirror 0.81
20 TestOffline 87.43
22 TestAddons/Setup 148.32
24 TestAddons/parallel/Registry 21.28
25 TestAddons/parallel/Ingress 25.09
26 TestAddons/parallel/MetricsServer 5.57
27 TestAddons/parallel/HelmTiller 14.37
29 TestAddons/parallel/CSI 38.8
30 TestAddons/parallel/Headlamp 11.06
32 TestAddons/serial/GCPAuth 42.03
33 TestAddons/StoppedEnableDisable 20.22
34 TestCertOptions 27.03
35 TestCertExpiration 231.67
37 TestForceSystemdFlag 40.46
38 TestForceSystemdEnv 62.63
39 TestKVMDriverInstallOrUpdate 6.46
43 TestErrorSpam/setup 22.9
44 TestErrorSpam/start 0.89
45 TestErrorSpam/status 1.06
46 TestErrorSpam/pause 1.57
47 TestErrorSpam/unpause 1.56
48 TestErrorSpam/stop 1.48
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 44.39
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 15.52
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.08
59 TestFunctional/serial/CacheCmd/cache/add_remote 4.14
60 TestFunctional/serial/CacheCmd/cache/add_local 2.11
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
62 TestFunctional/serial/CacheCmd/cache/list 0.07
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.16
65 TestFunctional/serial/CacheCmd/cache/delete 0.14
66 TestFunctional/serial/MinikubeKubectlCmd 0.12
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
68 TestFunctional/serial/ExtraConfig 32.55
69 TestFunctional/serial/ComponentHealth 0.07
70 TestFunctional/serial/LogsCmd 1.09
71 TestFunctional/serial/LogsFileCmd 1.11
73 TestFunctional/parallel/ConfigCmd 0.5
74 TestFunctional/parallel/DashboardCmd 13.13
75 TestFunctional/parallel/DryRun 0.53
76 TestFunctional/parallel/InternationalLanguage 0.35
77 TestFunctional/parallel/StatusCmd 1.2
80 TestFunctional/parallel/ServiceCmd 11.08
81 TestFunctional/parallel/ServiceCmdConnect 11.89
82 TestFunctional/parallel/AddonsCmd 0.2
83 TestFunctional/parallel/PersistentVolumeClaim 36.66
85 TestFunctional/parallel/SSHCmd 0.77
86 TestFunctional/parallel/CpCmd 1.49
87 TestFunctional/parallel/MySQL 29.68
88 TestFunctional/parallel/FileSync 0.43
89 TestFunctional/parallel/CertSync 2.41
93 TestFunctional/parallel/NodeLabels 0.07
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
97 TestFunctional/parallel/Version/short 0.1
98 TestFunctional/parallel/Version/components 0.65
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
103 TestFunctional/parallel/ImageCommands/ImageBuild 3.46
104 TestFunctional/parallel/ImageCommands/Setup 1.47
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.19
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 19.21
114 TestFunctional/parallel/ProfileCmd/profile_list 0.5
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.68
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.91
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.83
119 TestFunctional/parallel/ImageCommands/ImageRemove 0.78
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.35
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.51
128 TestFunctional/parallel/MountCmd/any-port 9.35
129 TestFunctional/parallel/MountCmd/specific-port 2.37
130 TestFunctional/delete_addon-resizer_images 0.08
131 TestFunctional/delete_my-image_image 0.02
132 TestFunctional/delete_minikube_cached_images 0.02
135 TestIngressAddonLegacy/StartLegacyK8sCluster 71.59
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.14
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
139 TestIngressAddonLegacy/serial/ValidateIngressAddons 30.54
142 TestJSONOutput/start/Command 44.03
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.69
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.61
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 5.8
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.27
167 TestKicCustomNetwork/create_custom_network 36.49
168 TestKicCustomNetwork/use_default_bridge_network 28.79
169 TestKicExistingNetwork 29.65
170 TestKicCustomSubnet 29.93
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 52.45
175 TestMountStart/serial/StartWithMountFirst 5.2
176 TestMountStart/serial/VerifyMountFirst 0.32
177 TestMountStart/serial/StartWithMountSecond 5.23
178 TestMountStart/serial/VerifyMountSecond 0.32
179 TestMountStart/serial/DeleteFirst 1.68
180 TestMountStart/serial/VerifyMountPostDelete 0.31
181 TestMountStart/serial/Stop 1.25
182 TestMountStart/serial/RestartStopped 6.72
183 TestMountStart/serial/VerifyMountPostStop 0.32
186 TestMultiNode/serial/FreshStart2Nodes 89.54
187 TestMultiNode/serial/DeployApp2Nodes 4.5
188 TestMultiNode/serial/PingHostFrom2Pods 0.88
189 TestMultiNode/serial/AddNode 41.47
190 TestMultiNode/serial/ProfileList 0.35
191 TestMultiNode/serial/CopyFile 11.37
192 TestMultiNode/serial/StopNode 2.35
193 TestMultiNode/serial/StartAfterStop 30.89
194 TestMultiNode/serial/RestartKeepsNodes 155.8
195 TestMultiNode/serial/DeleteNode 4.89
196 TestMultiNode/serial/StopMultiNode 39.99
197 TestMultiNode/serial/RestartMultiNode 105.02
198 TestMultiNode/serial/ValidateNameConflict 25.26
203 TestPreload 115.45
205 TestScheduledStopUnix 99.39
208 TestInsufficientStorage 15.82
209 TestRunningBinaryUpgrade 75.22
212 TestMissingContainerUpgrade 143.55
214 TestStoppedBinaryUpgrade/Setup 0.47
215 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
216 TestNoKubernetes/serial/StartWithK8s 45.38
217 TestStoppedBinaryUpgrade/Upgrade 119.4
218 TestNoKubernetes/serial/StartWithStopK8s 17.68
219 TestNoKubernetes/serial/Start 6.69
220 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
221 TestNoKubernetes/serial/ProfileList 1.28
222 TestNoKubernetes/serial/Stop 1.3
223 TestNoKubernetes/serial/StartNoArgs 6.59
224 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
232 TestNetworkPlugins/group/false 0.51
236 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
245 TestPause/serial/Start 57.98
246 TestPause/serial/SecondStartNoReconfiguration 16.17
247 TestNetworkPlugins/group/auto/Start 58.28
248 TestPause/serial/Pause 0.91
249 TestPause/serial/VerifyStatus 0.39
250 TestPause/serial/Unpause 0.65
251 TestPause/serial/PauseAgain 0.93
252 TestPause/serial/DeletePaused 2.54
253 TestPause/serial/VerifyDeletedResources 14.05
254 TestNetworkPlugins/group/kindnet/Start 46.02
255 TestNetworkPlugins/group/cilium/Start 106.07
256 TestNetworkPlugins/group/auto/KubeletFlags 0.34
257 TestNetworkPlugins/group/auto/NetCatPod 10.25
258 TestNetworkPlugins/group/auto/DNS 0.14
259 TestNetworkPlugins/group/auto/Localhost 0.12
260 TestNetworkPlugins/group/auto/HairPin 0.12
262 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
263 TestNetworkPlugins/group/kindnet/KubeletFlags 0.59
264 TestNetworkPlugins/group/kindnet/NetCatPod 9.32
265 TestNetworkPlugins/group/kindnet/DNS 0.14
266 TestNetworkPlugins/group/kindnet/Localhost 0.15
267 TestNetworkPlugins/group/kindnet/HairPin 0.13
268 TestNetworkPlugins/group/enable-default-cni/Start 299.02
269 TestNetworkPlugins/group/cilium/ControllerPod 5.02
270 TestNetworkPlugins/group/cilium/KubeletFlags 0.38
271 TestNetworkPlugins/group/cilium/NetCatPod 9.87
272 TestNetworkPlugins/group/cilium/DNS 0.14
273 TestNetworkPlugins/group/cilium/Localhost 0.14
274 TestNetworkPlugins/group/cilium/HairPin 0.12
275 TestNetworkPlugins/group/bridge/Start 38.36
276 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
277 TestNetworkPlugins/group/bridge/NetCatPod 9.23
281 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
282 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
285 TestStartStop/group/old-k8s-version/serial/FirstStart 118.85
289 TestStartStop/group/old-k8s-version/serial/DeployApp 8.34
290 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.61
291 TestStartStop/group/old-k8s-version/serial/Stop 20.08
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
293 TestStartStop/group/old-k8s-version/serial/SecondStart 433.54
298 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
299 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
300 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
301 TestStartStop/group/old-k8s-version/serial/Pause 3.07
303 TestStartStop/group/newest-cni/serial/FirstStart 35.92
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
305 TestStartStop/group/embed-certs/serial/Stop 4.72
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
308 TestStartStop/group/newest-cni/serial/DeployApp 0
309 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.58
310 TestStartStop/group/newest-cni/serial/Stop 1.29
311 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/newest-cni/serial/SecondStart 29.18
313 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
314 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
316 TestStartStop/group/newest-cni/serial/Pause 2.92
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.63
318 TestStartStop/group/no-preload/serial/Stop 1.27
319 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.65
322 TestStartStop/group/default-k8s-different-port/serial/Stop 1.77
323 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
x
+
TestDownloadOnly/v1.16.0/json-events (15.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220921212711-10174 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220921212711-10174 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (15.47459594s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.47s)
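
The preload-exists check that follows only verifies that this run left the preloaded-images tarball in the local cache. A hand-run equivalent (a sketch — the profile name download-check is made up here, and the cache path assumes the default ~/.minikube layout) is:

	# Same download-only start as above, outside the test harness
	out/minikube-linux-amd64 start --download-only -p download-check --force --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker
	# The preloaded images tarball for v1.16.0/containerd should now be in the cache
	ls ~/.minikube/cache/preloaded-tarball/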

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220921212711-10174
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220921212711-10174: exit status 85 (79.668452ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220921212711-10174 | jenkins | v1.27.0 | 21 Sep 22 21:27 UTC |          |
	|         | download-only-20220921212711-10174 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=containerd     |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|         | --container-runtime=containerd     |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 21:27:11
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 21:27:11.976989   10186 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:27:11.977168   10186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:27:11.977181   10186 out.go:309] Setting ErrFile to fd 2...
	I0921 21:27:11.977189   10186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:27:11.977311   10186 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	W0921 21:27:11.977423   10186 root.go:310] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/config/config.json: no such file or directory
	I0921 21:27:11.978022   10186 out.go:303] Setting JSON to true
	I0921 21:27:11.978823   10186 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":583,"bootTime":1663795049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 21:27:11.978886   10186 start.go:125] virtualization: kvm guest
	I0921 21:27:11.981736   10186 out.go:97] [download-only-20220921212711-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	W0921 21:27:11.981845   10186 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball: no such file or directory
	I0921 21:27:11.981860   10186 notify.go:214] Checking for updates...
	I0921 21:27:11.983416   10186 out.go:169] MINIKUBE_LOCATION=14995
	I0921 21:27:11.984944   10186 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:27:11.986404   10186 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:27:11.987754   10186 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 21:27:11.989121   10186 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0921 21:27:11.991418   10186 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0921 21:27:11.991573   10186 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:27:12.017271   10186 docker.go:137] docker version: linux-20.10.18
	I0921 21:27:12.017354   10186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:27:12.735775   10186 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2022-09-21 21:27:12.035953827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:27:12.735876   10186 docker.go:254] overlay module found
	I0921 21:27:12.737725   10186 out.go:97] Using the docker driver based on user configuration
	I0921 21:27:12.737744   10186 start.go:284] selected driver: docker
	I0921 21:27:12.737749   10186 start.go:808] validating driver "docker" against <nil>
	I0921 21:27:12.737827   10186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:27:12.837036   10186 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2022-09-21 21:27:12.754650861 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:27:12.837189   10186 start_flags.go:302] no existing cluster config was found, will generate one from the flags 
	I0921 21:27:12.837673   10186 start_flags.go:383] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0921 21:27:12.837783   10186 start_flags.go:849] Wait components to verify : map[apiserver:true system_pods:true]
	I0921 21:27:12.839816   10186 out.go:169] Using Docker driver with root privileges
	I0921 21:27:12.841061   10186 cni.go:95] Creating CNI manager for ""
	I0921 21:27:12.841077   10186 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 21:27:12.841104   10186 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0921 21:27:12.841116   10186 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0921 21:27:12.841121   10186 start_flags.go:311] Found "CNI" CNI - setting NetworkPlugin=cni
	I0921 21:27:12.841132   10186 start_flags.go:316] config:
	{Name:download-only-20220921212711-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220921212711-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:27:12.842615   10186 out.go:97] Starting control plane node download-only-20220921212711-10174 in cluster download-only-20220921212711-10174
	I0921 21:27:12.842639   10186 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 21:27:12.843800   10186 out.go:97] Pulling base image ...
	I0921 21:27:12.843823   10186 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0921 21:27:12.843947   10186 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:27:12.863987   10186 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:27:12.864291   10186 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:27:12.864399   10186 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:27:12.951848   10186 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0921 21:27:12.951876   10186 cache.go:57] Caching tarball of preloaded images
	I0921 21:27:12.952067   10186 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0921 21:27:12.954267   10186 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0921 21:27:12.954288   10186 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0921 21:27:13.066678   10186 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0921 21:27:15.396935   10186 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0921 21:27:15.397024   10186 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0921 21:27:16.259525   10186 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0921 21:27:16.259860   10186 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/download-only-20220921212711-10174/config.json ...
	I0921 21:27:16.259893   10186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/download-only-20220921212711-10174/config.json: {Name:mk984272d38a544e9e3b2099a5690199e2b347cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0921 21:27:16.260070   10186 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0921 21:27:16.260276   10186 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220921212711-10174"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.25.2/json-events (7.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220921212711-10174 --force --alsologtostderr --kubernetes-version=v1.25.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220921212711-10174 --force --alsologtostderr --kubernetes-version=v1.25.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.179543351s)
--- PASS: TestDownloadOnly/v1.25.2/json-events (7.18s)

                                                
                                    
TestDownloadOnly/v1.25.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.2/preload-exists
--- PASS: TestDownloadOnly/v1.25.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220921212711-10174
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220921212711-10174: exit status 85 (80.612918ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220921212711-10174 | jenkins | v1.27.0 | 21 Sep 22 21:27 UTC |          |
	|         | download-only-20220921212711-10174 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=containerd     |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|         | --container-runtime=containerd     |                                    |         |         |                     |          |
	| start   | -o=json --download-only -p         | download-only-20220921212711-10174 | jenkins | v1.27.0 | 21 Sep 22 21:27 UTC |          |
	|         | download-only-20220921212711-10174 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.25.2       |                                    |         |         |                     |          |
	|         | --container-runtime=containerd     |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|         | --container-runtime=containerd     |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/09/21 21:27:27
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0921 21:27:27.535651   10349 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:27:27.535776   10349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:27:27.535788   10349 out.go:309] Setting ErrFile to fd 2...
	I0921 21:27:27.535793   10349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:27:27.535904   10349 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	W0921 21:27:27.536038   10349 root.go:310] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/config/config.json: no such file or directory
	I0921 21:27:27.536478   10349 out.go:303] Setting JSON to true
	I0921 21:27:27.537252   10349 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":599,"bootTime":1663795049,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 21:27:27.537315   10349 start.go:125] virtualization: kvm guest
	I0921 21:27:27.539592   10349 out.go:97] [download-only-20220921212711-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 21:27:27.539714   10349 notify.go:214] Checking for updates...
	I0921 21:27:27.541153   10349 out.go:169] MINIKUBE_LOCATION=14995
	I0921 21:27:27.542482   10349 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:27:27.543688   10349 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:27:27.544919   10349 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 21:27:27.546159   10349 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0921 21:27:27.548344   10349 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0921 21:27:27.548748   10349 config.go:180] Loaded profile config "download-only-20220921212711-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0921 21:27:27.548789   10349 start.go:716] api.Load failed for download-only-20220921212711-10174: filestore "download-only-20220921212711-10174": Docker machine "download-only-20220921212711-10174" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0921 21:27:27.548834   10349 driver.go:365] Setting default libvirt URI to qemu:///system
	W0921 21:27:27.548859   10349 start.go:716] api.Load failed for download-only-20220921212711-10174: filestore "download-only-20220921212711-10174": Docker machine "download-only-20220921212711-10174" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0921 21:27:27.574007   10349 docker.go:137] docker version: linux-20.10.18
	I0921 21:27:27.574098   10349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:27:27.665281   10349 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2022-09-21 21:27:27.592036974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:27:27.665401   10349 docker.go:254] overlay module found
	I0921 21:27:27.667382   10349 out.go:97] Using the docker driver based on existing profile
	I0921 21:27:27.667409   10349 start.go:284] selected driver: docker
	I0921 21:27:27.667428   10349 start.go:808] validating driver "docker" against &{Name:download-only-20220921212711-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220921212711-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:27:27.667633   10349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:27:27.753458   10349 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:34 SystemTime:2022-09-21 21:27:27.685720683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:27:27.753996   10349 cni.go:95] Creating CNI manager for ""
	I0921 21:27:27.754014   10349 cni.go:162] "docker" driver + containerd runtime found, recommending kindnet
	I0921 21:27:27.754028   10349 start_flags.go:316] config:
	{Name:download-only-20220921212711-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:download-only-20220921212711-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:27:27.755922   10349 out.go:97] Starting control plane node download-only-20220921212711-10174 in cluster download-only-20220921212711-10174
	I0921 21:27:27.755959   10349 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0921 21:27:27.757491   10349 out.go:97] Pulling base image ...
	I0921 21:27:27.757525   10349 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 21:27:27.757643   10349 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local docker daemon
	I0921 21:27:27.777580   10349 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c to local cache
	I0921 21:27:27.777828   10349 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory
	I0921 21:27:27.777848   10349 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c in local cache directory, skipping pull
	I0921 21:27:27.777853   10349 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c exists in cache, skipping pull
	I0921 21:27:27.777868   10349 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c as a tarball
	I0921 21:27:27.868016   10349 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.2/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	I0921 21:27:27.868045   10349 cache.go:57] Caching tarball of preloaded images
	I0921 21:27:27.868230   10349 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime containerd
	I0921 21:27:27.870230   10349 out.go:97] Downloading Kubernetes v1.25.2 preload ...
	I0921 21:27:27.870249   10349 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4 ...
	I0921 21:27:27.977569   10349 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.2/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:08332f21825777b3899a58ae6e7093d9 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220921212711-10174"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220921212711-10174
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnlyKic (4.01s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220921212735-10174 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220921212735-10174 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (2.960649047s)
helpers_test.go:175: Cleaning up "download-docker-20220921212735-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220921212735-10174
--- PASS: TestDownloadOnlyKic (4.01s)

                                                
                                    
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220921212739-10174 --alsologtostderr --binary-mirror http://127.0.0.1:38241 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220921212739-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220921212739-10174
--- PASS: TestBinaryMirror (0.81s)

                                                
                                    
TestOffline (87.43s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220921215355-10174 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220921215355-10174 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m20.187031918s)
helpers_test.go:175: Cleaning up "offline-containerd-20220921215355-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220921215355-10174

                                                
                                                
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220921215355-10174: (7.241332s)
--- PASS: TestOffline (87.43s)

                                                
                                    
TestAddons/Setup (148.32s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220921212740-10174 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220921212740-10174 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.323432249s)
--- PASS: TestAddons/Setup (148.32s)

                                                
                                    
TestAddons/parallel/Registry (21.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 10.370837ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-58fsl" [1b407a59-f892-4c29-b9f8-5af473a4b383] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008918553s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-697x2" [d62c8d20-4e4b-4c4f-b084-da805fa8505e] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.032858791s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220921212740-10174 delete po -l run=registry-test --now
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220921212740-10174 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220921212740-10174 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.503448995s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 ip
2022/09/21 21:30:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:340: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.28s)

                                                
                                    
TestAddons/parallel/Ingress (25.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220921212740-10174 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220921212740-10174 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220921212740-10174 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [1f1fb90b-0586-4e47-8afc-529fe7dcae52] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [1f1fb90b-0586-4e47-8afc-529fe7dcae52] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.00627409s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context addons-20220921212740-10174 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 ip

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable ingress-dns --alsologtostderr -v=1: (1.129670132s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable ingress --alsologtostderr -v=1: (7.487156049s)
--- PASS: TestAddons/parallel/Ingress (25.09s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 9.142764ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-769cd898cd-9skn5" [746d2f4f-f32c-4536-a96b-c6442b879b6a] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009512798s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220921212740-10174 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.57s)

                                                
                                    
TestAddons/parallel/HelmTiller (14.37s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 9.231489ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-696b5bfbb7-f7wsg" [d30a282e-f4af-4af5-a167-c3a56a3aa515] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009080194s

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220921212740-10174 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) Done: kubectl --context addons-20220921212740-10174 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.575204762s)
addons_test.go:442: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.37s)

                                                
                                    
TestAddons/parallel/CSI (38.8s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 7.139551ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220921212740-10174 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220921212740-10174 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220921212740-10174 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [65895004-c1f6-4e52-8a8e-30b137e51056] Pending
helpers_test.go:342: "task-pv-pod" [65895004-c1f6-4e52-8a8e-30b137e51056] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [65895004-c1f6-4e52-8a8e-30b137e51056] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.005985527s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220921212740-10174 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220921212740-10174 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220921212740-10174 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220921212740-10174 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220921212740-10174 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220921212740-10174 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220921212740-10174 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220921212740-10174 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [37d8bc6c-d595-4acd-86e5-b01cbb8588ff] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [37d8bc6c-d595-4acd-86e5-b01cbb8588ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [37d8bc6c-d595-4acd-86e5-b01cbb8588ff] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.005777777s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220921212740-10174 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220921212740-10174 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220921212740-10174 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.833206255s)
addons_test.go:594: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.80s)

                                                
                                    
TestAddons/parallel/Headlamp (11.06s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-20220921212740-10174 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-20220921212740-10174 --alsologtostderr -v=1: (1.054558274s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-788c8d94dd-rnqzt" [64e1387f-89d6-4a3c-b5be-b7a3f0485ac9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-788c8d94dd-rnqzt" [64e1387f-89d6-4a3c-b5be-b7a3f0485ac9] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.008624366s
--- PASS: TestAddons/parallel/Headlamp (11.06s)

                                                
                                    
TestAddons/serial/GCPAuth (42.03s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220921212740-10174 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220921212740-10174 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [38197fb5-4aa0-4c69-9503-0a56934455cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [38197fb5-4aa0-4c69-9503-0a56934455cc] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.005703591s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220921212740-10174 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220921212740-10174 describe sa gcp-auth-test
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220921212740-10174 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-linux-amd64 -p addons-20220921212740-10174 addons disable gcp-auth --alsologtostderr -v=1: (6.091576506s)
addons_test.go:703: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220921212740-10174 addons enable gcp-auth
addons_test.go:703: (dbg) Done: out/minikube-linux-amd64 -p addons-20220921212740-10174 addons enable gcp-auth: (2.126908295s)
addons_test.go:709: (dbg) Run:  kubectl --context addons-20220921212740-10174 apply -f testdata/private-image.yaml
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-5c86c669bd-rj96m" [dc6591b6-e203-448f-a57f-a31163c52794] Pending
helpers_test.go:342: "private-image-5c86c669bd-rj96m" [dc6591b6-e203-448f-a57f-a31163c52794] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-5c86c669bd-rj96m" [dc6591b6-e203-448f-a57f-a31163c52794] Running
addons_test.go:716: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 16.007635651s
addons_test.go:722: (dbg) Run:  kubectl --context addons-20220921212740-10174 apply -f testdata/private-image-eu.yaml
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-64c96f687b-z45vr" [d8e21c0c-5cd8-46b3-9721-eac99149ea21] Pending
helpers_test.go:342: "private-image-eu-64c96f687b-z45vr" [d8e21c0c-5cd8-46b3-9721-eac99149ea21] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-64c96f687b-z45vr" [d8e21c0c-5cd8-46b3-9721-eac99149ea21] Running
addons_test.go:727: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 7.007928191s
--- PASS: TestAddons/serial/GCPAuth (42.03s)
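For reference, the flow this test automates can be repeated by hand against any profile with the gcp-auth addon enabled. A minimal sketch (the profile and manifest names are hypothetical; the addon is what injects the GOOGLE_* variables into newly created pods):

    # create a pod, then confirm the addon injected credentials into it
    kubectl create -f busybox.yaml
    kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
    # the addon can be toggled without recreating the cluster
    minikube addons disable gcp-auth
    minikube addons enable gcp-auth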

TestAddons/StoppedEnableDisable (20.22s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220921212740-10174
addons_test.go:134: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220921212740-10174: (20.035719773s)
addons_test.go:138: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220921212740-10174
addons_test.go:142: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220921212740-10174
--- PASS: TestAddons/StoppedEnableDisable (20.22s)
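What this subtest pins down is that addon state can be flipped while the cluster is stopped. A hand-run sketch (profile name hypothetical):

    minikube stop -p demo
    minikube addons enable dashboard -p demo     # recorded in the profile config
    minikube addons disable dashboard -p demo    # both succeed with the host down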

TestCertOptions (27.03s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220921215754-10174 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0921 21:58:01.553591   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220921215754-10174 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (23.886617764s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220921215754-10174 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220921215754-10174 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220921215754-10174 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220921215754-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220921215754-10174
=== CONT  TestCertOptions
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220921215754-10174: (2.412767504s)
--- PASS: TestCertOptions (27.03s)
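The extra --apiserver-ips/--apiserver-names/--apiserver-port values above must land in the generated apiserver certificate, which is what the openssl call over ssh checks. A hand-run sketch (the trailing grep is illustrative, not part of the test):

    minikube start -p demo --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
    minikube ssh -p demo "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"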

TestCertExpiration (231.67s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220921215524-10174 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0921 21:55:43.526560   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220921215524-10174 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (33.516734887s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220921215524-10174 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220921215524-10174 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (15.643979363s)
helpers_test.go:175: Cleaning up "cert-expiration-20220921215524-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220921215524-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220921215524-10174: (2.512748441s)
--- PASS: TestCertExpiration (231.67s)
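The gap between the two start calls is the point: the first start issues 3-minute certificates, the test waits for them to expire (which accounts for most of the 231s runtime), and the second start must detect and regenerate them. Sketch (profile name hypothetical):

    minikube start -p demo --cert-expiration=3m
    sleep 200                                        # let the short-lived certs expire
    minikube start -p demo --cert-expiration=8760h   # restart succeeds by regenerating certs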

TestForceSystemdFlag (40.46s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220921215558-10174 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220921215558-10174 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.267677264s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220921215558-10174 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220921215558-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220921215558-10174
E0921 21:56:38.505047   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220921215558-10174: (3.805348918s)
--- PASS: TestForceSystemdFlag (40.46s)
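--force-systemd switches the runtime to the systemd cgroup driver, and the test verifies it by reading containerd's config inside the node. A sketch (grepping for SystemdCgroup, the key containerd's runc shim uses for this, is my shorthand for the test's assertion):

    minikube start -p demo --force-systemd --driver=docker --container-runtime=containerd
    minikube ssh -p demo "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected: SystemdCgroup = true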

TestForceSystemdEnv (62.63s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220921215355-10174 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220921215355-10174 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (59.399583539s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220921215355-10174 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220921215355-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220921215355-10174
=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220921215355-10174: (2.634767911s)
--- PASS: TestForceSystemdEnv (62.63s)
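The env variant exercises the same assertion, driven by minikube's MINIKUBE_FORCE_SYSTEMD environment variable rather than the flag. Sketch:

    MINIKUBE_FORCE_SYSTEMD=true minikube start -p demo --driver=docker --container-runtime=containerd
    minikube ssh -p demo "cat /etc/containerd/config.toml" | grep SystemdCgroup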

TestKVMDriverInstallOrUpdate (6.46s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.46s)

TestErrorSpam/setup (22.9s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220921213200-10174 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220921213200-10174 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220921213200-10174 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220921213200-10174 --driver=docker  --container-runtime=containerd: (22.904569914s)
--- PASS: TestErrorSpam/setup (22.90s)

TestErrorSpam/start (0.89s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 start --dry-run
--- PASS: TestErrorSpam/start (0.89s)

TestErrorSpam/status (1.06s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 status
--- PASS: TestErrorSpam/status (1.06s)

TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.56s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (1.48s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 stop: (1.247764944s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220921213200-10174 --log_dir /tmp/nospam-20220921213200-10174 stop
--- PASS: TestErrorSpam/stop (1.48s)
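Each TestErrorSpam subtest runs the same subcommand several times and fails if any run produces unexpected warning or error output. A loose approximation of the check (the grep pattern is illustrative; the real comparison against an allowlist lives in error_spam_test.go):

    for i in 1 2 3; do
      minikube -p nospam --log_dir /tmp/nospam status
    done 2>&1 | grep -iE "error|warning" && echo "spam detected"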

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/test/nested/copy/10174/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (44.39s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220921213235-10174 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220921213235-10174 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (44.38796546s)
--- PASS: TestFunctional/serial/StartWithProxy (44.39s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.52s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220921213235-10174 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220921213235-10174 --alsologtostderr -v=8: (15.521768657s)
functional_test.go:655: soft start took 15.522397353s for "functional-20220921213235-10174" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.52s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220921213235-10174 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add k8s.gcr.io/pause:3.1: (1.433771613s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add k8s.gcr.io/pause:3.3: (1.617195962s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add k8s.gcr.io/pause:latest: (1.08658155s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)

TestFunctional/serial/CacheCmd/cache/add_local (2.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220921213235-10174 /tmp/TestFunctionalserialCacheCmdcacheadd_local2226077248/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add minikube-local-cache-test:functional-20220921213235-10174
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 cache add minikube-local-cache-test:functional-20220921213235-10174: (1.866257959s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cache delete minikube-local-cache-test:functional-20220921213235-10174
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220921213235-10174
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (334.242116ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 cache reload: (1.149978787s)
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)
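cache reload pushes every image in minikube's cache back into the node's runtime, which is exactly the recovery path exercised here: delete the image in-node, watch crictl inspecti fail, reload, watch it succeed. The sequence by hand:

    minikube cache add k8s.gcr.io/pause:latest
    minikube ssh sudo crictl rmi k8s.gcr.io/pause:latest
    minikube ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1: image gone
    minikube cache reload
    minikube ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again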

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 kubectl -- --context functional-20220921213235-10174 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220921213235-10174 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (32.55s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220921213235-10174 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220921213235-10174 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.547304894s)
functional_test.go:753: restart took 32.547397903s for "functional-20220921213235-10174" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.55s)
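--extra-config passes per-component flags through to the Kubernetes components; here it enables an extra admission plugin on the apiserver and restarts the existing cluster in place (hence the "restart took" line above). The invocation by hand, against a hypothetical profile:

    minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all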

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220921213235-10174 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
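The health check is a label query over the control-plane pods followed by phase and readiness checks. An equivalent one-liner (the jsonpath formatting is my addition, not the test's):

    kubectl get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'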

TestFunctional/serial/LogsCmd (1.09s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 logs: (1.090173122s)
--- PASS: TestFunctional/serial/LogsCmd (1.09s)

TestFunctional/serial/LogsFileCmd (1.11s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 logs --file /tmp/TestFunctionalserialLogsFileCmd778692612/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 logs --file /tmp/TestFunctionalserialLogsFileCmd778692612/001/logs.txt: (1.111533688s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.11s)

TestFunctional/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 config get cpus: exit status 14 (77.350806ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 config get cpus: exit status 14 (83.454914ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
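Note the exit-code contract the Non-zero exit lines assert: config get returns 14 when the key is not set. By hand:

    minikube config set cpus 2
    minikube config get cpus     # prints 2, exits 0
    minikube config unset cpus
    minikube config get cpus     # Error: specified key could not be found in config
    echo $?                      # 14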

TestFunctional/parallel/DashboardCmd (13.13s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220921213235-10174 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220921213235-10174 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 47015: os: process already finished
E0921 21:35:09.086281   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (13.13s)

TestFunctional/parallel/DryRun (0.53s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220921213235-10174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220921213235-10174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (211.991558ms)

-- stdout --
	* [functional-20220921213235-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0921 21:34:55.542532   46152 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:34:55.542638   46152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:34:55.542647   46152 out.go:309] Setting ErrFile to fd 2...
	I0921 21:34:55.542652   46152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:34:55.542739   46152 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 21:34:55.543239   46152 out.go:303] Setting JSON to false
	I0921 21:34:55.544254   46152 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1047,"bootTime":1663795049,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 21:34:55.544325   46152 start.go:125] virtualization: kvm guest
	I0921 21:34:55.546855   46152 out.go:177] * [functional-20220921213235-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 21:34:55.548335   46152 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:34:55.549838   46152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:34:55.551213   46152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:34:55.552522   46152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 21:34:55.553889   46152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 21:34:55.555746   46152 config.go:180] Loaded profile config "functional-20220921213235-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:34:55.556369   46152 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:34:55.586157   46152 docker.go:137] docker version: linux-20.10.18
	I0921 21:34:55.586259   46152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:34:55.674014   46152 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-09-21 21:34:55.606588762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:34:55.674122   46152 docker.go:254] overlay module found
	I0921 21:34:55.676347   46152 out.go:177] * Using the docker driver based on existing profile
	I0921 21:34:55.677639   46152 start.go:284] selected driver: docker
	I0921 21:34:55.677656   46152 start.go:808] validating driver "docker" against &{Name:functional-20220921213235-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213235-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:34:55.677792   46152 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:34:55.680413   46152 out.go:177] 
	W0921 21:34:55.682070   46152 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0921 21:34:55.683505   46152 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220921213235-10174 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
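--dry-run walks the full validation path without creating or touching anything, so the undersized memory request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY: the requested 250MiB is below the 1800MB minimum, as the stderr above shows). Reproduced by hand against a hypothetical profile:

    minikube start -p demo --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    echo $?    # 23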

TestFunctional/parallel/InternationalLanguage (0.35s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220921213235-10174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220921213235-10174 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (353.433872ms)

-- stdout --
	* [functional-20220921213235-10174] minikube v1.27.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0921 21:34:49.973009   44371 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:34:49.973258   44371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:34:49.973273   44371 out.go:309] Setting ErrFile to fd 2...
	I0921 21:34:49.973281   44371 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:34:49.973476   44371 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 21:34:49.974042   44371 out.go:303] Setting JSON to false
	I0921 21:34:49.975070   44371 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1041,"bootTime":1663795049,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 21:34:49.975134   44371 start.go:125] virtualization: kvm guest
	I0921 21:34:49.993826   44371 out.go:177] * [functional-20220921213235-10174] minikube v1.27.0 sur Ubuntu 20.04 (kvm/amd64)
	I0921 21:34:50.016403   44371 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:34:50.021876   44371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:34:50.047954   44371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:34:50.065508   44371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 21:34:50.080001   44371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 21:34:50.095821   44371 config.go:180] Loaded profile config "functional-20220921213235-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:34:50.096277   44371 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:34:50.127413   44371 docker.go:137] docker version: linux-20.10.18
	I0921 21:34:50.127515   44371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:34:50.219086   44371 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-09-21 21:34:50.147041066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:34:50.219193   44371 docker.go:254] overlay module found
	I0921 21:34:50.233719   44371 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0921 21:34:50.237749   44371 start.go:284] selected driver: docker
	I0921 21:34:50.237780   44371 start.go:808] validating driver "docker" against &{Name:functional-20220921213235-10174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:functional-20220921213235-10174 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I0921 21:34:50.237931   44371 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:34:50.245994   44371 out.go:177] 
	W0921 21:34:50.249802   44371 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0921 21:34:50.252947   44371 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.35s)

TestFunctional/parallel/StatusCmd (1.2s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
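status takes a Go template via -f and structured output via -o json; the fields exercised above are Host, Kubelet, APIServer, and Kubeconfig (the test's "kublet" is just a literal label in the template, not a field name). By hand:

    minikube status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube status -o json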

TestFunctional/parallel/ServiceCmd (11.08s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220921213235-10174 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220921213235-10174 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-8h747" [6b18d887-79c1-4e36-b7f8-427bb089dbdf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-8h747" [6b18d887-79c1-4e36-b7f8-427bb089dbdf] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.063971904s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: found endpoint: https://192.168.49.2:30850
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:30850
--- PASS: TestFunctional/parallel/ServiceCmd (11.08s)
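
Note: the NodePort that `minikube service --url` resolved above (30850 in this run) can also be read straight off the Service object; a kubectl-only sketch:
	kubectl --context functional-20220921213235-10174 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'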

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220921213235-10174 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220921213235-10174 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-wjd4z" [2d80bc96-06d7-45ad-b7d1-e03713bfce11] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-wjd4z" [2d80bc96-06d7-45ad-b7d1-e03713bfce11] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.077326981s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 service hello-node-connect --url
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:30647
functional_test.go:1604: http://192.168.49.2:30647: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6458c8fb6f-wjd4z

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30647
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.89s)
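
Note: the body above is echoserver's standard reply to a plain GET; the same request can be reproduced by hand against the endpoint the test found (port is specific to this run):
	curl http://192.168.49.2:30647/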

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (36.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [23c97e69-553d-4f29-9f05-6bdec32bb821] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008511706s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220921213235-10174 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220921213235-10174 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220921213235-10174 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220921213235-10174 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [4e8deed3-b78e-43a6-95e7-28c0e41e6898] Pending
helpers_test.go:342: "sp-pod" [4e8deed3-b78e-43a6-95e7-28c0e41e6898] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [4e8deed3-b78e-43a6-95e7-28c0e41e6898] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.006184913s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220921213235-10174 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220921213235-10174 delete -f testdata/storage-provisioner/pod.yaml: (2.646334763s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220921213235-10174 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [f5088546-0721-40fb-a096-d6d79588a354] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [f5088546-0721-40fb-a096-d6d79588a354] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [f5088546-0721-40fb-a096-d6d79588a354] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006457208s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.66s)
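
Note: testdata/storage-provisioner/pvc.yaml is not reproduced in this log; a minimal claim of the same shape (the name myclaim comes from the `get pvc` call above; the access mode and size are assumptions) would be:
	kubectl --context functional-20220921213235-10174 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	EOF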

                                                
                                    
TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)
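
Note: `minikube ssh` simply runs the quoted string through the node's shell, so any command works the same way (the command below is illustrative):
	out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "uname -a"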

                                                
                                    
TestFunctional/parallel/CpCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh -n functional-20220921213235-10174 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 cp functional-20220921213235-10174:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2907701416/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh -n functional-20220921213235-10174 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.49s)

                                                
                                    
TestFunctional/parallel/MySQL (29.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220921213235-10174 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-596b7fcdbf-86bxq" [6d724130-a567-4342-8f4e-9613ec2a1042] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-86bxq" [6d724130-a567-4342-8f4e-9613ec2a1042] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-86bxq" [6d724130-a567-4342-8f4e-9613ec2a1042] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.015991685s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;": exit status 1 (297.055123ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;": exit status 1 (249.469804ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;": exit status 1 (311.792272ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;": exit status 1 (217.203217ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;": exit status 1 (315.89227ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.68s)
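
Note: the ERROR 1045/2002 retries above are the normal window while mysqld bootstraps inside the pod; the test simply polls until `show databases;` succeeds. A readiness check of the same kind (a sketch; mysqladmin ships in the mysql:5.7 image, though it may also be refused mid-initialization):
	kubectl --context functional-20220921213235-10174 exec mysql-596b7fcdbf-86bxq -- mysqladmin -ppassword ping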

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/10174/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo cat /etc/test/nested/copy/10174/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

                                                
                                    
TestFunctional/parallel/CertSync (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/10174.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo cat /etc/ssl/certs/10174.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/10174.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo cat /usr/share/ca-certificates/10174.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/101742.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo cat /etc/ssl/certs/101742.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/101742.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo cat /usr/share/ca-certificates/101742.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.41s)
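
Note: the hashed filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links into /etc/ssl/certs; the hash for a given certificate can be derived with:
	openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/10174.pem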

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220921213235-10174 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
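
Note: the go-template above prints the keys of the first node's label map; an equivalent one-liner with jsonpath:
	kubectl --context functional-20220921213235-10174 get nodes -o jsonpath='{.items[0].metadata.labels}'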

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo systemctl is-active docker"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo systemctl is-active docker": exit status 1 (389.585421ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo systemctl is-active crio": exit status 1 (384.376429ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)
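
Note: the `Process exited with status 3` lines are expected here: `systemctl is-active` prints the unit state and exits non-zero for anything but active (3 means inactive), so a non-zero exit plus `inactive` on stdout is the passing case. Reproducible directly:
	out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh 'sudo systemctl is-active docker; echo exit=$?'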

                                                
                                    
TestFunctional/parallel/Version/short (0.10s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220921213235-10174
docker.io/kindest/kindnetd:v20220726-ed811e41
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format table

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.25.2                         | sha256:ca0ea1 | 15.8MB |
| docker.io/library/nginx                     | latest                          | sha256:2d389e | 56.8MB |
| gcr.io/google-containers/addon-resizer      | functional-20220921213235-10174 | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/pause                            | 3.1                             | sha256:da86e6 | 315kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3                          | sha256:5185b9 | 14.8MB |
| registry.k8s.io/kube-apiserver              | v1.25.2                         | sha256:97801f | 34.2MB |
| docker.io/library/mysql                     | 5.7                             | sha256:daff57 | 128MB  |
| k8s.gcr.io/echoserver                       | 1.8                             | sha256:82e4c8 | 46.2MB |
| localhost/my-image                          | functional-20220921213235-10174 | sha256:d56df1 | 775kB  |
| registry.k8s.io/kube-proxy                  | v1.25.2                         | sha256:1c7d8c | 20.3MB |
| registry.k8s.io/pause                       | 3.8                             | sha256:487387 | 311kB  |
| docker.io/kindest/kindnetd                  | v20220726-ed811e41              | sha256:d921ce | 25.8MB |
| docker.io/library/minikube-local-cache-test | functional-20220921213235-10174 | sha256:074c20 | 1.74kB |
| docker.io/library/nginx                     | alpine                          | sha256:804f9c | 10.2MB |
| k8s.gcr.io/pause                            | 3.3                             | sha256:0184c1 | 298kB  |
| registry.k8s.io/etcd                        | 3.5.4-0                         | sha256:a8a176 | 102MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/pause                            | latest                          | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-controller-manager     | v1.25.2                         | sha256:dbfceb | 31.3MB |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format json

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format json:
[{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:dbfceb93c69b6d85661fe46c3e50de9e927e4895ebba2892a1db116e69c81890","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f961aee35fd2e9a5ee057365e56c5bf40a39bfef91f785f312e51891db41876b"],"repoTags":["registry.k8s.i
o/kube-controller-manager:v1.25.2"],"size":"31261507"},{"id":"sha256:1c7d8c51823b5eb08189d553d911097ec8a6a40fea40bb5bdea91842f30d2e86","repoDigests":["registry.k8s.io/kube-proxy@sha256:ddde7d23d168496d321ef9175a8bf964a54a982b026fb207c306d853cbbd4f77"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.2"],"size":"20263406"},{"id":"sha256:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9","repoDigests":["docker.io/kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb"],"repoTags":["docker.io/kindest/kindnetd:v20220726-ed811e41"],"size":"25818452"},{"id":"sha256:074c20d43ff50a26bd4245ba489f00bf7fb1b9e48edb241cc4dba46dac60374b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220921213235-10174"],"size":"1737"},{"id":"sha256:2d389e545974d4a93ebdef09b650753a55f72d1ab4518d17a30c0e1b3e297444","repoDigests":["docker.io/library/nginx@sha256:0b970013351304af46f322da1263516b188318682b2ab1091862497591189ff1"],"repoTags":["docker.io/librar
y/nginx:latest"],"size":"56768208"},{"id":"sha256:ca0ea1ee3cfd3d1ced15a8e6f4a236a436c5733b20a0b2dbbfbfd59977e12959","repoDigests":["registry.k8s.io/kube-scheduler@sha256:ef2e24a920a7432aff5b435562301dde3beb528b0c7bbec58ddf0a9af64d5fce"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.2"],"size":"15796102"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"},{"id":"sha256:804f9cebfdc58964d6b25527e53802a3527a9ee880e082dc5b19a3d5466c43b7","repoDigests":["docker.io/library/nginx@sha256:082f8c10bd47b6acc8ef15ae61ae45dd8fde0e9f389a8b5cb23c37408642bf5d"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10224689"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"r
epoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:d56df15b50bacfd8b11cb00251ca42524cfc2dc238137e6c88ce40b0827d1875","repoDigests":[],"repoTags":["localhost/my-image:functional-20220921213235-10174"],"size":"775254"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:97801f83949087fbdcc09b1c84ddda0ed5d01f4aabd17787a7714eb2796082b3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86e7b79379dddf58d7b7189d02ca96cc7e07d18efa4eb42adcaa4cf94531b
96e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.2"],"size":"34235609"},{"id":"sha256:daff57b7d2d1e009d0b271972f62dbf4de64b8cdb9cd646442aeda961e615f44","repoDigests":["docker.io/library/mysql@sha256:c1bda6ecdbc63d3b0d3a3a3ce195de3dd755c4a0658ed782a16a0682216b9a48"],"repoTags":["docker.io/library/mysql:5.7"],"size":"128325429"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220921213235-10174"],"size":"10823156"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
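
Note: the JSON form is the machine-readable one; for example, to list just the tags (assuming jq is available on the host):
	out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format json | jq -r '.[].repoTags[]'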

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls --format yaml:
- id: sha256:804f9cebfdc58964d6b25527e53802a3527a9ee880e082dc5b19a3d5466c43b7
repoDigests:
- docker.io/library/nginx@sha256:082f8c10bd47b6acc8ef15ae61ae45dd8fde0e9f389a8b5cb23c37408642bf5d
repoTags:
- docker.io/library/nginx:alpine
size: "10224689"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:97801f83949087fbdcc09b1c84ddda0ed5d01f4aabd17787a7714eb2796082b3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86e7b79379dddf58d7b7189d02ca96cc7e07d18efa4eb42adcaa4cf94531b96e
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.2
size: "34235609"
- id: sha256:daff57b7d2d1e009d0b271972f62dbf4de64b8cdb9cd646442aeda961e615f44
repoDigests:
- docker.io/library/mysql@sha256:c1bda6ecdbc63d3b0d3a3a3ce195de3dd755c4a0658ed782a16a0682216b9a48
repoTags:
- docker.io/library/mysql:5.7
size: "128325429"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9
repoDigests:
- docker.io/kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb
repoTags:
- docker.io/kindest/kindnetd:v20220726-ed811e41
size: "25818452"
- id: sha256:074c20d43ff50a26bd4245ba489f00bf7fb1b9e48edb241cc4dba46dac60374b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220921213235-10174
size: "1737"
- id: sha256:2d389e545974d4a93ebdef09b650753a55f72d1ab4518d17a30c0e1b3e297444
repoDigests:
- docker.io/library/nginx@sha256:0b970013351304af46f322da1263516b188318682b2ab1091862497591189ff1
repoTags:
- docker.io/library/nginx:latest
size: "56768208"
- id: sha256:1c7d8c51823b5eb08189d553d911097ec8a6a40fea40bb5bdea91842f30d2e86
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ddde7d23d168496d321ef9175a8bf964a54a982b026fb207c306d853cbbd4f77
repoTags:
- registry.k8s.io/kube-proxy:v1.25.2
size: "20263406"
- id: sha256:ca0ea1ee3cfd3d1ced15a8e6f4a236a436c5733b20a0b2dbbfbfd59977e12959
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:ef2e24a920a7432aff5b435562301dde3beb528b0c7bbec58ddf0a9af64d5fce
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.2
size: "15796102"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
size: "10823156"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:dbfceb93c69b6d85661fe46c3e50de9e927e4895ebba2892a1db116e69c81890
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f961aee35fd2e9a5ee057365e56c5bf40a39bfef91f785f312e51891db41876b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.2
size: "31261507"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh pgrep buildkitd: exit status 1 (324.317427ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image build -t localhost/my-image:functional-20220921213235-10174 testdata/build

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 image build -t localhost/my-image:functional-20220921213235-10174 testdata/build: (2.883399292s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220921213235-10174 image build -t localhost/my-image:functional-20220921213235-10174 testdata/build:
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.3s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:8545ca4c57d490f558059fbd6f78cba18fade01f3a5e918f28adb29ac18a1915 0.0s done
#8 exporting config sha256:d56df15b50bacfd8b11cb00251ca42524cfc2dc238137e6c88ce40b0827d1875 0.0s done
#8 naming to localhost/my-image:functional-20220921213235-10174 done
#8 DONE 0.2s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)
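
Note: from the BuildKit steps above (97B Dockerfile; [1/3] FROM busybox, [2/3] RUN true, [3/3] ADD content.txt /), testdata/build evidently contains a Dockerfile approximately like this reconstruction:
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /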

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.443154341s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.47s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174: (4.872082772s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220921213235-10174 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220921213235-10174 apply -f testdata/testsvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [bd794892-e870-4f96-9721-d7f3d7be063e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [bd794892-e870-4f96-9721-d7f3d7be063e] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.006996063s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "408.892225ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1324: Took "86.868632ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1361: Took "439.956397ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "77.120606ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174: (5.406153332s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.68s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.383248842s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174: (5.183047254s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image save gcr.io/google-containers/addon-resizer:functional-20220921213235-10174 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 image save gcr.io/google-containers/addon-resizer:functional-20220921213235-10174 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.833080496s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.83s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image rm gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.78s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (2.109397202s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.35s)
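ImageSaveToFile, ImageRemove, and ImageLoadFromFile above together form an export → remove → reimport round trip for a cached image. The same sequence scripted from Go, shelling out to the minikube binary with the subcommands shown in the log; the tarball path is an assumed stand-in for the Jenkins workspace path, and error handling is simplified:

// Sketch of the image save/rm/load/ls round trip exercised above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	const profile = "functional-20220921213235-10174"
	const img = "gcr.io/google-containers/addon-resizer:" + profile
	const tar = "/tmp/addon-resizer-save.tar" // assumed path, not the workspace path above

	run("-p", profile, "image", "save", img, tar) // export to a tarball
	run("-p", profile, "image", "rm", img)        // drop it from the runtime
	run("-p", profile, "image", "load", tar)      // reload from the tarball
	run("-p", profile, "image", "ls")             // verify it is back
}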

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220921213235-10174 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.109.110.76 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220921213235-10174 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220921213235-10174 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220921213235-10174: (1.459625545s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.51s)

TestFunctional/parallel/MountCmd/any-port (9.35s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220921213235-10174 /tmp/TestFunctionalparallelMountCmdany-port317694222/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1663796090260336202" to /tmp/TestFunctionalparallelMountCmdany-port317694222/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1663796090260336202" to /tmp/TestFunctionalparallelMountCmdany-port317694222/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1663796090260336202" to /tmp/TestFunctionalparallelMountCmdany-port317694222/001/test-1663796090260336202
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.106655ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 21 21:34 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 21 21:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 21 21:34 test-1663796090260336202
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh cat /mount-9p/test-1663796090260336202
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220921213235-10174 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [7aeebe28-3892-49db-a41e-cfb02debc058] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [7aeebe28-3892-49db-a41e-cfb02debc058] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [7aeebe28-3892-49db-a41e-cfb02debc058] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [7aeebe28-3892-49db-a41e-cfb02debc058] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005585955s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220921213235-10174 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220921213235-10174 /tmp/TestFunctionalparallelMountCmdany-port317694222/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.35s)
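Worth noting in the run above: the first findmnt probe exits non-zero because the 9p mount is not yet visible inside the guest, and the test simply probes again a moment later. A sketch of that probe-with-retry, reusing the exact ssh command from the log; the loop bounds and helper name are illustrative:

// Sketch: retry the 9p mount probe until it appears or we give up.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func mounted(profile string) bool {
	// Non-zero exit (as seen in the first probe above) just means "not yet".
	return exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil
}

func main() {
	const profile = "functional-20220921213235-10174"
	for i := 0; i < 10; i++ {
		if mounted(profile) {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}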

TestFunctional/parallel/MountCmd/specific-port (2.37s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220921213235-10174 /tmp/TestFunctionalparallelMountCmdspecific-port2047105340/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (476.739365ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220921213235-10174 /tmp/TestFunctionalparallelMountCmdspecific-port2047105340/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh "sudo umount -f /mount-9p": exit status 1 (322.60291ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220921213235-10174 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220921213235-10174 /tmp/TestFunctionalparallelMountCmdspecific-port2047105340/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E0921 21:35:08.448376   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:08.454002   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:08.464270   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:08.484528   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:08.524814   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:08.605051   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:08.765408   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
2022/09/21 21:35:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.37s)

TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220921213235-10174
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220921213235-10174
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220921213235-10174
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (71.59s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220921213512-10174 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0921 21:35:13.568951   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:18.689535   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:28.930559   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:35:49.411377   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220921213512-10174 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m11.590805364s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (71.59s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.14s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 addons enable ingress --alsologtostderr -v=5
E0921 21:36:30.372488   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 addons enable ingress --alsologtostderr -v=5: (14.136435954s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.14s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (30.54s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:164: (dbg) Run:  kubectl --context ingress-addon-legacy-20220921213512-10174 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:164: (dbg) Done: kubectl --context ingress-addon-legacy-20220921213512-10174 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.81486687s)
addons_test.go:184: (dbg) Run:  kubectl --context ingress-addon-legacy-20220921213512-10174 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-20220921213512-10174 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [38d269cb-f793-4f3b-bcc6-53cc7002916d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [38d269cb-f793-4f3b-bcc6-53cc7002916d] Running
addons_test.go:202: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.00548677s
addons_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Run:  kubectl --context ingress-addon-legacy-20220921213512-10174 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 ip
addons_test.go:249: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:258: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 addons disable ingress-dns --alsologtostderr -v=1: (2.242454199s)
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 addons disable ingress --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220921213512-10174 addons disable ingress --alsologtostderr -v=1: (7.267757884s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (30.54s)
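The ingress validation above curls the node with an explicit Host header so the nginx ingress rule for nginx.example.com routes the request to the test pod. An equivalent probe written in Go, assuming the node IP 192.168.49.2 reported by the ip step is reachable from wherever this runs (the test itself curls 127.0.0.1 from inside the node over ssh):

// Sketch: send a request with an overridden Host header to hit an ingress rule.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routes through the ingress rule, not the default backend
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}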

TestJSONOutput/start/Command (44.03s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220921213711-10174 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0921 21:37:52.293227   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220921213711-10174 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (44.030221406s)
--- PASS: TestJSONOutput/start/Command (44.03s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220921213711-10174 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220921213711-10174 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220921213711-10174 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220921213711-10174 --output=json --user=testUser: (5.804794017s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220921213807-10174 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220921213807-10174 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.650791ms)
-- stdout --
	{"specversion":"1.0","id":"4725d11a-f8b3-4f03-bc08-5c318659eda7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220921213807-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2bdea3f-84e2-43ae-bfe9-808387060f0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14995"}}
	{"specversion":"1.0","id":"3e2fce03-d544-4528-97dc-caf638d29c74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9b14c3e2-532b-48f2-864a-8547fe155974","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig"}}
	{"specversion":"1.0","id":"c734d654-b974-4a77-9a3d-916900444157","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube"}}
	{"specversion":"1.0","id":"f9d94b10-1877-45de-b054-7c33f4f4d72d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8d10c9d9-4684-4f46-9705-937f98a1d98d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220921213807-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220921213807-10174
--- PASS: TestErrorJSONOutput (0.27s)
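Each line minikube emits under --output=json is a CloudEvents v1.0 envelope, as the stdout above shows (specversion, id, source, type, data). A minimal sketch of consuming that stream line by line; the struct only covers the fields used here, and the pipeline it reads from is up to the caller:

// Sketch: decode each --output=json line as a CloudEvent envelope.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	SpecVersion string          `json:"specversion"`
	Type        string          `json:"type"`
	Data        json.RawMessage `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprogram
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Println("not a CloudEvent:", err)
			continue
		}
		fmt.Println(ev.Type) // io.k8s.sigs.minikube.step, .info, .error, ...
	}
}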

TestKicCustomNetwork/create_custom_network (36.49s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220921213808-10174 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220921213808-10174 --network=: (34.269782505s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220921213808-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220921213808-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220921213808-10174: (2.198455372s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.49s)

TestKicCustomNetwork/use_default_bridge_network (28.79s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220921213844-10174 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220921213844-10174 --network=bridge: (26.802600661s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220921213844-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220921213844-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220921213844-10174: (1.967586328s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (28.79s)

TestKicExistingNetwork (29.65s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220921213913-10174 --network=existing-network
E0921 21:39:20.482186   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:20.487450   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:20.497718   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:20.518062   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:20.558389   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:20.638747   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:20.799211   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:21.119788   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:21.760092   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:23.040633   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:25.601752   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:39:30.722105   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220921213913-10174 --network=existing-network: (27.403883363s)
helpers_test.go:175: Cleaning up "existing-network-20220921213913-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220921213913-10174
E0921 21:39:40.962923   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220921213913-10174: (2.084393414s)
--- PASS: TestKicExistingNetwork (29.65s)

TestKicCustomSubnet (29.93s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220921213943-10174 --subnet=192.168.60.0/24
E0921 21:40:01.443775   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:40:08.448082   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220921213943-10174 --subnet=192.168.60.0/24: (27.795398296s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220921213943-10174 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220921213943-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220921213943-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220921213943-10174: (2.110722168s)
--- PASS: TestKicCustomSubnet (29.93s)
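The subnet check above relies on docker network inspect with a Go template to pull the first IPAM subnet from the created network. The same verification as a standalone snippet, using the network name and --format string shown in the log:

// Sketch: verify a Docker network carries the requested subnet.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const network = "custom-subnet-20220921213943-10174"
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		fmt.Println("unexpected subnet:", got)
		return
	}
	fmt.Println("subnet OK:", got)
}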

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (52.45s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220921214013-10174 --driver=docker  --container-runtime=containerd
E0921 21:40:36.133590   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220921214013-10174 --driver=docker  --container-runtime=containerd: (23.150818176s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220921214013-10174 --driver=docker  --container-runtime=containerd
E0921 21:40:42.404093   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220921214013-10174 --driver=docker  --container-runtime=containerd: (23.892071875s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220921214013-10174
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220921214013-10174
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220921214013-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220921214013-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220921214013-10174: (1.966406876s)
helpers_test.go:175: Cleaning up "first-20220921214013-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220921214013-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220921214013-10174: (2.229025458s)
--- PASS: TestMinikubeProfile (52.45s)

TestMountStart/serial/StartWithMountFirst (5.2s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220921214105-10174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220921214105-10174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.194965555s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.20s)

TestMountStart/serial/VerifyMountFirst (0.32s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220921214105-10174 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (5.23s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220921214105-10174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220921214105-10174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.228226801s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.23s)

TestMountStart/serial/VerifyMountSecond (0.32s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220921214105-10174 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (1.68s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220921214105-10174 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220921214105-10174 --alsologtostderr -v=5: (1.677948455s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220921214105-10174 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220921214105-10174
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220921214105-10174: (1.251300862s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (6.72s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220921214105-10174
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220921214105-10174: (5.717027941s)
--- PASS: TestMountStart/serial/RestartStopped (6.72s)

TestMountStart/serial/VerifyMountPostStop (0.32s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220921214105-10174 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (89.54s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220921214128-10174 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0921 21:41:38.505416   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:38.510655   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:38.521332   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:38.541585   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:38.581841   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:38.662936   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:38.823648   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:39.144196   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:39.785120   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:41.066053   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:43.626725   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:48.747860   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:41:58.988685   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
E0921 21:42:04.324647   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:42:19.469578   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220921214128-10174 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m29.002519458s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.54s)
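The fresh two-node start above maps to a single invocation with the same flags the test passes; reproducing it by hand (profile name illustrative):

    # two-node cluster on the docker driver with containerd, waiting for all components
    minikube start -p multinode-demo --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=containerd
    # both the control plane and the worker should report Running
    minikube -p multinode-demo status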

TestMultiNode/serial/DeployApp2Nodes (4.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- rollout status deployment/busybox
E0921 21:43:00.431613   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- rollout status deployment/busybox: (2.741314858s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-hp96q -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-p44c2 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-hp96q -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-p44c2 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-hp96q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-p44c2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.50s)
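The DNS assertions above resolve three name forms (external, short service name, FQDN) from a busybox pod on each node. Condensed, via the bundled kubectl as the test uses it (pod names differ per run):

    kubectl apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    # repeat for each pod returned above
    kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local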

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-hp96q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-hp96q -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-p44c2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220921214128-10174 -- exec busybox-65db55d5d6-p44c2 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
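The pipeline in the exec commands above leans on busybox nslookup's fixed output layout: the address that host.minikube.internal resolves to sits on line 5, third space-separated field, hence the NR==5 and cut -f3. Run inside a pod:

    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"   # 192.168.58.1 in this run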

TestMultiNode/serial/AddNode (41.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220921214128-10174 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220921214128-10174 -v 3 --alsologtostderr: (40.762147501s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.47s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (11.37s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp testdata/cp-test.txt multinode-20220921214128-10174:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2956905242/001/cp-test_multinode-20220921214128-10174.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174:/home/docker/cp-test.txt multinode-20220921214128-10174-m02:/home/docker/cp-test_multinode-20220921214128-10174_multinode-20220921214128-10174-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m02 "sudo cat /home/docker/cp-test_multinode-20220921214128-10174_multinode-20220921214128-10174-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174:/home/docker/cp-test.txt multinode-20220921214128-10174-m03:/home/docker/cp-test_multinode-20220921214128-10174_multinode-20220921214128-10174-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m03 "sudo cat /home/docker/cp-test_multinode-20220921214128-10174_multinode-20220921214128-10174-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp testdata/cp-test.txt multinode-20220921214128-10174-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2956905242/001/cp-test_multinode-20220921214128-10174-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174-m02:/home/docker/cp-test.txt multinode-20220921214128-10174:/home/docker/cp-test_multinode-20220921214128-10174-m02_multinode-20220921214128-10174.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174 "sudo cat /home/docker/cp-test_multinode-20220921214128-10174-m02_multinode-20220921214128-10174.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174-m02:/home/docker/cp-test.txt multinode-20220921214128-10174-m03:/home/docker/cp-test_multinode-20220921214128-10174-m02_multinode-20220921214128-10174-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m03 "sudo cat /home/docker/cp-test_multinode-20220921214128-10174-m02_multinode-20220921214128-10174-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp testdata/cp-test.txt multinode-20220921214128-10174-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2956905242/001/cp-test_multinode-20220921214128-10174-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174-m03:/home/docker/cp-test.txt multinode-20220921214128-10174:/home/docker/cp-test_multinode-20220921214128-10174-m03_multinode-20220921214128-10174.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174 "sudo cat /home/docker/cp-test_multinode-20220921214128-10174-m03_multinode-20220921214128-10174.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 cp multinode-20220921214128-10174-m03:/home/docker/cp-test.txt multinode-20220921214128-10174-m02:/home/docker/cp-test_multinode-20220921214128-10174-m03_multinode-20220921214128-10174-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 ssh -n multinode-20220921214128-10174-m02 "sudo cat /home/docker/cp-test_multinode-20220921214128-10174-m03_multinode-20220921214128-10174-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.37s)
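The matrix above exercises every direction `minikube cp` supports, each followed by an `ssh -n` readback. In general form (placeholders, not this run's names):

    minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt           # host -> node
    minikube -p <profile> cp <node>:/home/docker/cp-test.txt /tmp/cp-test.txt               # node -> host
    minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/x.txt  # node -> node
    minikube -p <profile> ssh -n <node-b> "sudo cat /home/docker/x.txt"                     # readback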

TestMultiNode/serial/StopNode (2.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220921214128-10174 node stop m03: (1.243366326s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220921214128-10174 status: exit status 7 (557.968977ms)
-- stdout --
	multinode-20220921214128-10174
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220921214128-10174-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220921214128-10174-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr: exit status 7 (547.305503ms)
-- stdout --
	multinode-20220921214128-10174
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220921214128-10174-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220921214128-10174-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0921 21:43:58.668353  100923 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:43:58.668770  100923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:43:58.668786  100923 out.go:309] Setting ErrFile to fd 2...
	I0921 21:43:58.668795  100923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:43:58.669021  100923 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 21:43:58.669272  100923 out.go:303] Setting JSON to false
	I0921 21:43:58.669296  100923 mustload.go:65] Loading cluster: multinode-20220921214128-10174
	I0921 21:43:58.670141  100923 config.go:180] Loaded profile config "multinode-20220921214128-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:43:58.670165  100923 status.go:253] checking status of multinode-20220921214128-10174 ...
	I0921 21:43:58.670567  100923 cli_runner.go:164] Run: docker container inspect multinode-20220921214128-10174 --format={{.State.Status}}
	I0921 21:43:58.693321  100923 status.go:328] multinode-20220921214128-10174 host status = "Running" (err=<nil>)
	I0921 21:43:58.693344  100923 host.go:66] Checking if "multinode-20220921214128-10174" exists ...
	I0921 21:43:58.693611  100923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220921214128-10174
	I0921 21:43:58.717405  100923 host.go:66] Checking if "multinode-20220921214128-10174" exists ...
	I0921 21:43:58.717684  100923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:43:58.717727  100923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921214128-10174
	I0921 21:43:58.740530  100923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49228 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/multinode-20220921214128-10174/id_rsa Username:docker}
	I0921 21:43:58.832353  100923 ssh_runner.go:195] Run: systemctl --version
	I0921 21:43:58.835901  100923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 21:43:58.844550  100923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:43:58.934095  100923 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-09-21 21:43:58.864474732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:43:58.934614  100923 kubeconfig.go:92] found "multinode-20220921214128-10174" server: "https://192.168.58.2:8443"
	I0921 21:43:58.934637  100923 api_server.go:165] Checking apiserver status ...
	I0921 21:43:58.934664  100923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0921 21:43:58.943571  100923 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1193/cgroup
	I0921 21:43:58.950491  100923 api_server.go:181] apiserver freezer: "11:freezer:/docker/4be47b9a44270062791f27e0f667312d71bff733d698e2de94d82a438253a6fb/kubepods/burstable/podc8df5a204003791b330f9bfde1999342/16b60990330a5195c3b82788d3bcde007db64c08a8521acd325139d5c28d7be1"
	I0921 21:43:58.950544  100923 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4be47b9a44270062791f27e0f667312d71bff733d698e2de94d82a438253a6fb/kubepods/burstable/podc8df5a204003791b330f9bfde1999342/16b60990330a5195c3b82788d3bcde007db64c08a8521acd325139d5c28d7be1/freezer.state
	I0921 21:43:58.956619  100923 api_server.go:203] freezer state: "THAWED"
	I0921 21:43:58.956645  100923 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0921 21:43:58.962142  100923 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0921 21:43:58.962169  100923 status.go:419] multinode-20220921214128-10174 apiserver status = Running (err=<nil>)
	I0921 21:43:58.962182  100923 status.go:255] multinode-20220921214128-10174 status: &{Name:multinode-20220921214128-10174 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0921 21:43:58.962213  100923 status.go:253] checking status of multinode-20220921214128-10174-m02 ...
	I0921 21:43:58.962438  100923 cli_runner.go:164] Run: docker container inspect multinode-20220921214128-10174-m02 --format={{.State.Status}}
	I0921 21:43:58.985449  100923 status.go:328] multinode-20220921214128-10174-m02 host status = "Running" (err=<nil>)
	I0921 21:43:58.985475  100923 host.go:66] Checking if "multinode-20220921214128-10174-m02" exists ...
	I0921 21:43:58.985729  100923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220921214128-10174-m02
	I0921 21:43:59.008830  100923 host.go:66] Checking if "multinode-20220921214128-10174-m02" exists ...
	I0921 21:43:59.009100  100923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0921 21:43:59.009141  100923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220921214128-10174-m02
	I0921 21:43:59.032587  100923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49233 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/multinode-20220921214128-10174-m02/id_rsa Username:docker}
	I0921 21:43:59.120165  100923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0921 21:43:59.128936  100923 status.go:255] multinode-20220921214128-10174-m02 status: &{Name:multinode-20220921214128-10174-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0921 21:43:59.128967  100923 status.go:253] checking status of multinode-20220921214128-10174-m03 ...
	I0921 21:43:59.129204  100923 cli_runner.go:164] Run: docker container inspect multinode-20220921214128-10174-m03 --format={{.State.Status}}
	I0921 21:43:59.151974  100923 status.go:328] multinode-20220921214128-10174-m03 host status = "Stopped" (err=<nil>)
	I0921 21:43:59.152004  100923 status.go:341] host is not running, skipping remaining checks
	I0921 21:43:59.152010  100923 status.go:255] multinode-20220921214128-10174-m03 status: &{Name:multinode-20220921214128-10174-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
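Worth noting from the output above: after a single node is stopped, `minikube status` still prints per-node detail but exits nonzero (status 7 in this run), so scripts can detect a partially stopped cluster. A sketch with a placeholder profile name:

    minikube -p <profile> node stop m03
    minikube -p <profile> status
    echo $?   # 7 here, with m03 reported as host: Stopped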

TestMultiNode/serial/StartAfterStop (30.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 node start m03 --alsologtostderr
E0921 21:44:20.482362   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:44:22.352180   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220921214128-10174 node start m03 --alsologtostderr: (30.104924338s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.89s)

TestMultiNode/serial/RestartKeepsNodes (155.8s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220921214128-10174
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220921214128-10174
E0921 21:44:48.165505   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
E0921 21:45:08.448529   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220921214128-10174: (41.050213321s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220921214128-10174 --wait=true -v=8 --alsologtostderr
E0921 21:46:38.505072   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220921214128-10174 --wait=true -v=8 --alsologtostderr: (1m54.620764345s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220921214128-10174
--- PASS: TestMultiNode/serial/RestartKeepsNodes (155.80s)
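The point of this subtest is that a full cluster stop/start cycle preserves the node roster. By hand (profile name as a placeholder):

    minikube node list -p <profile>     # record the roster
    minikube stop -p <profile>
    minikube start -p <profile> --wait=true
    minikube node list -p <profile>     # roster should be unchanged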

TestMultiNode/serial/DeleteNode (4.89s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 node delete m03
E0921 21:47:06.193033   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220921214128-10174 node delete m03: (4.231205522s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.89s)
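Deletion is verified on three layers above: minikube's own status, the Kubernetes node list, and the docker volumes backing the nodes. Condensed:

    minikube -p <profile> node delete m03
    minikube -p <profile> status
    kubectl get nodes      # m03 should be gone from the cluster
    docker volume ls       # and no volume should remain for the deleted node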

TestMultiNode/serial/StopMultiNode (39.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220921214128-10174 stop: (39.76515164s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220921214128-10174 status: exit status 7 (111.254123ms)
-- stdout --
	multinode-20220921214128-10174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220921214128-10174-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr: exit status 7 (110.786386ms)
-- stdout --
	multinode-20220921214128-10174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220921214128-10174-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0921 21:47:50.680796  111738 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:47:50.681190  111738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:47:50.681234  111738 out.go:309] Setting ErrFile to fd 2...
	I0921 21:47:50.681252  111738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:47:50.681494  111738 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 21:47:50.681755  111738 out.go:303] Setting JSON to false
	I0921 21:47:50.681798  111738 mustload.go:65] Loading cluster: multinode-20220921214128-10174
	I0921 21:47:50.682563  111738 config.go:180] Loaded profile config "multinode-20220921214128-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.25.2
	I0921 21:47:50.682585  111738 status.go:253] checking status of multinode-20220921214128-10174 ...
	I0921 21:47:50.682998  111738 cli_runner.go:164] Run: docker container inspect multinode-20220921214128-10174 --format={{.State.Status}}
	I0921 21:47:50.705212  111738 status.go:328] multinode-20220921214128-10174 host status = "Stopped" (err=<nil>)
	I0921 21:47:50.705235  111738 status.go:341] host is not running, skipping remaining checks
	I0921 21:47:50.705245  111738 status.go:255] multinode-20220921214128-10174 status: &{Name:multinode-20220921214128-10174 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0921 21:47:50.705298  111738 status.go:253] checking status of multinode-20220921214128-10174-m02 ...
	I0921 21:47:50.705564  111738 cli_runner.go:164] Run: docker container inspect multinode-20220921214128-10174-m02 --format={{.State.Status}}
	I0921 21:47:50.727245  111738 status.go:328] multinode-20220921214128-10174-m02 host status = "Stopped" (err=<nil>)
	I0921 21:47:50.727268  111738 status.go:341] host is not running, skipping remaining checks
	I0921 21:47:50.727276  111738 status.go:255] multinode-20220921214128-10174-m02 status: &{Name:multinode-20220921214128-10174-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (39.99s)

TestMultiNode/serial/RestartMultiNode (105.02s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220921214128-10174 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0921 21:49:20.482301   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220921214128-10174 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m44.343075503s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220921214128-10174 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (105.02s)

TestMultiNode/serial/ValidateNameConflict (25.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220921214128-10174
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220921214128-10174-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220921214128-10174-m02 --driver=docker  --container-runtime=containerd: exit status 14 (86.020603ms)
-- stdout --
	* [multinode-20220921214128-10174-m02] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220921214128-10174-m02' is duplicated with machine name 'multinode-20220921214128-10174-m02' in profile 'multinode-20220921214128-10174'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220921214128-10174-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220921214128-10174-m03 --driver=docker  --container-runtime=containerd: (22.841545805s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220921214128-10174
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220921214128-10174: exit status 80 (338.348573ms)
-- stdout --
	* Adding node m03 to cluster multinode-20220921214128-10174
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220921214128-10174-m03 already exists in multinode-20220921214128-10174-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220921214128-10174-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220921214128-10174-m03: (1.921417607s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.26s)
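Both failures above are the expected behavior under test: a new profile may not reuse an existing machine name (exit 14, MK_USAGE), and `node add` refuses a node name already claimed elsewhere (exit 80, GUEST_NODE_ADD). Using this run's names:

    # rejected: profile name collides with the existing machine multinode-20220921214128-10174-m02
    minikube start -p multinode-20220921214128-10174-m02 --driver=docker --container-runtime=containerd
    # rejected: the next node name, m03, already exists as its own profile
    minikube node add -p multinode-20220921214128-10174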

TestPreload (115.45s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220921215004-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0921 21:50:08.448202   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220921215004-10174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m8.76838102s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220921215004-10174 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220921215004-10174 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.017015858s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220921215004-10174 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
E0921 21:51:31.493858   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 21:51:38.505076   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220921215004-10174 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (41.880257521s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220921215004-10174 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220921215004-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220921215004-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220921215004-10174: (2.437162639s)
--- PASS: TestPreload (115.45s)
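The scenario checks that an image pulled into a cluster created with --preload=false is still present after upgrading to a Kubernetes version that does use a preload tarball. Condensed, with an illustrative profile name:

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.17.0 --driver=docker --container-runtime=containerd
    minikube ssh -p preload-demo -- sudo crictl pull gcr.io/k8s-minikube/busybox
    minikube start -p preload-demo --kubernetes-version=v1.17.3 --driver=docker --container-runtime=containerd
    minikube ssh -p preload-demo -- sudo crictl image ls    # busybox should still be listed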

TestScheduledStopUnix (99.39s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220921215200-10174 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220921215200-10174 --memory=2048 --driver=docker  --container-runtime=containerd: (22.777568514s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220921215200-10174 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220921215200-10174 -n scheduled-stop-20220921215200-10174
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220921215200-10174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220921215200-10174 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220921215200-10174 -n scheduled-stop-20220921215200-10174
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220921215200-10174
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220921215200-10174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220921215200-10174
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220921215200-10174: exit status 7 (89.40465ms)
-- stdout --
	scheduled-stop-20220921215200-10174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220921215200-10174 -n scheduled-stop-20220921215200-10174
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220921215200-10174 -n scheduled-stop-20220921215200-10174: exit status 7 (87.786666ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220921215200-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220921215200-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220921215200-10174: (4.920787504s)
--- PASS: TestScheduledStopUnix (99.39s)
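The scheduled-stop flags exercised above, in order (profile name as a placeholder):

    minikube stop -p <profile> --schedule 5m                   # arm a stop five minutes out
    minikube status -p <profile> --format={{.TimeToStop}}      # inspect the pending schedule
    minikube stop -p <profile> --cancel-scheduled              # disarm it
    minikube stop -p <profile> --schedule 15s                  # re-arm; the host reaches Stopped and status exits 7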

TestInsufficientStorage (15.82s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220921215339-10174 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220921215339-10174 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.249364147s)
-- stdout --
	{"specversion":"1.0","id":"16887d03-d3de-4091-8555-0888f69618b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220921215339-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bd06860-a2f9-416a-b8f7-21eaa247bed1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14995"}}
	{"specversion":"1.0","id":"218daf72-a88a-4c84-8433-bdf8f0e6b503","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66eaf2a5-caa6-454f-9182-e9ef6257213f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig"}}
	{"specversion":"1.0","id":"70b1a0a4-fbeb-40df-93cb-68d71e7407fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube"}}
	{"specversion":"1.0","id":"79de528a-ca8b-448d-b75f-e91bf06b1a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6b379318-0ad8-4920-a349-41d6382d3b1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3ad545b5-6c9b-4fda-9da4-a132e6d7f868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a557f127-cbad-4ce3-b6f8-0f4d4bf0b296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"725128eb-3f60-4912-8867-77fc9011e2ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6f427242-dc38-48a2-a934-9e213a19d411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220921215339-10174 in cluster insufficient-storage-20220921215339-10174","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"65bec275-bf74-4fbe-977d-7f9f1350553a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ee07fa3-c19c-4a5c-8bf1-b0afb4402ab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8e261ed-44e5-4b56-8c5a-93737fbe2715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220921215339-10174 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220921215339-10174 --output=json --layout=cluster: exit status 7 (327.31069ms)
-- stdout --
	{"Name":"insufficient-storage-20220921215339-10174","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.27.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220921215339-10174","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0921 21:53:49.346233  132326 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220921215339-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220921215339-10174 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220921215339-10174 --output=json --layout=cluster: exit status 7 (321.460669ms)
-- stdout --
	{"Name":"insufficient-storage-20220921215339-10174","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.27.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220921215339-10174","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0921 21:53:49.668894  132435 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220921215339-10174" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	E0921 21:53:49.676896  132435 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/insufficient-storage-20220921215339-10174/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220921215339-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220921215339-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220921215339-10174: (5.922770895s)
--- PASS: TestInsufficientStorage (15.82s)
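With --output=json, minikube emits one CloudEvents object per line, and the test pins a fake capacity via MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 (both echoed in the stream above). A sketch of machine-checking for the storage error, assuming jq is available:

    minikube start -p <profile> --output=json --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'
    # prints RSRC_DOCKER_STORAGE (start exits 26) when /var is effectively full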

TestRunningBinaryUpgrade (75.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.2767019306.exe start -p running-upgrade-20220921215639-10174 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.2767019306.exe start -p running-upgrade-20220921215639-10174 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (38.093076172s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220921215639-10174 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220921215639-10174 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.440615611s)
helpers_test.go:175: Cleaning up "running-upgrade-20220921215639-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220921215639-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220921215639-10174: (2.873842385s)
--- PASS: TestRunningBinaryUpgrade (75.22s)

TestMissingContainerUpgrade (143.55s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.1770641519.exe start -p missing-upgrade-20220921215458-10174 --memory=2200 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.1770641519.exe start -p missing-upgrade-20220921215458-10174 --memory=2200 --driver=docker  --container-runtime=containerd: (1m19.738427522s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220921215458-10174
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220921215458-10174: (12.319816542s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220921215458-10174
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220921215458-10174 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220921215458-10174 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (48.687421439s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220921215458-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220921215458-10174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220921215458-10174: (2.305082286s)
--- PASS: TestMissingContainerUpgrade (143.55s)

TestStoppedBinaryUpgrade/Setup (0.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (119.011222ms)

-- stdout --
	* [NoKubernetes-20220921215355-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
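Exit status 14 here is a usage error, not a crash: --no-kubernetes and --kubernetes-version are mutually exclusive, and minikube refuses the combination before doing any work. A sketch of the failing and corrected invocations, following the hint in the stderr above (profile name copied from this test):

	$ out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd   # exit status 14 (MK_USAGE)
	$ out/minikube-linux-amd64 config unset kubernetes-version   # clears a globally configured version, per the error text
	$ out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --driver=docker --container-runtime=containerd   # accepted (see the later serial tests)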

TestNoKubernetes/serial/StartWithK8s (45.38s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --driver=docker  --container-runtime=containerd: (44.849497389s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220921215355-10174 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.38s)

TestStoppedBinaryUpgrade/Upgrade (119.4s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.2963599947.exe start -p stopped-upgrade-20220921215355-10174 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0921 21:54:20.482278   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.2963599947.exe start -p stopped-upgrade-20220921215355-10174 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.393928377s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.2963599947.exe -p stopped-upgrade-20220921215355-10174 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.2963599947.exe -p stopped-upgrade-20220921215355-10174 stop: (1.272275033s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220921215355-10174 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220921215355-10174 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.734484958s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.40s)
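The upgrade path exercised here reduces to three steps: provision the cluster with the old release binary, stop it, then start the same profile with the binary under test. Sketched with the exact commands from this run:

	$ /tmp/minikube-v1.16.0.2963599947.exe start -p stopped-upgrade-20220921215355-10174 --memory=2200 --vm-driver=docker --container-runtime=containerd
	$ /tmp/minikube-v1.16.0.2963599947.exe -p stopped-upgrade-20220921215355-10174 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-20220921215355-10174 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd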

TestNoKubernetes/serial/StartWithStopK8s (17.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.187219112s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220921215355-10174 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220921215355-10174 status -o json: exit status 2 (422.336642ms)

-- stdout --
	{"Name":"NoKubernetes-20220921215355-10174","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220921215355-10174

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220921215355-10174: (2.065710836s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.68s)
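The exit status 2 above is the expected reading, not a failure: status reports a running host whose Kubernetes components are stopped, which is exactly what --no-kubernetes on an existing profile should produce. A sketch of reading the JSON and the exit code together (shell phrasing illustrative):

	$ out/minikube-linux-amd64 -p NoKubernetes-20220921215355-10174 status -o json; echo "exit=$?"
	# {"Host":"Running","Kubelet":"Stopped","APIServer":"Stopped",...}  exit=2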

TestNoKubernetes/serial/Start (6.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.69383151s)
--- PASS: TestNoKubernetes/serial/Start (6.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220921215355-10174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220921215355-10174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (367.150961ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)
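The ssh probe leans on systemd conventions: systemctl is-active --quiet exits 0 only when the unit is active, and the status 3 surfaced above is what systemd returns for an inactive unit, so the non-zero exit is the passing outcome. The same check run by hand, sketched:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-20220921215355-10174 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet inactive, as intended"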

TestNoKubernetes/serial/ProfileList (1.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220921215355-10174
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220921215355-10174: (1.297858594s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (6.59s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --driver=docker  --container-runtime=containerd
E0921 21:55:08.448182   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220921215355-10174 --driver=docker  --container-runtime=containerd: (6.593804379s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.59s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220921215355-10174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220921215355-10174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (378.759987ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestNetworkPlugins/group/false (0.51s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220921215523-10174 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220921215523-10174 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (289.305453ms)

-- stdout --
	* [false-20220921215523-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0921 21:55:23.684274  152511 out.go:296] Setting OutFile to fd 1 ...
	I0921 21:55:23.684398  152511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:55:23.684410  152511 out.go:309] Setting ErrFile to fd 2...
	I0921 21:55:23.684418  152511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0921 21:55:23.684545  152511 root.go:333] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
	I0921 21:55:23.685135  152511 out.go:303] Setting JSON to false
	I0921 21:55:23.686307  152511 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2275,"bootTime":1663795049,"procs":470,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1017-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0921 21:55:23.686376  152511 start.go:125] virtualization: kvm guest
	I0921 21:55:23.693243  152511 out.go:177] * [false-20220921215523-10174] minikube v1.27.0 on Ubuntu 20.04 (kvm/amd64)
	I0921 21:55:23.694937  152511 out.go:177]   - MINIKUBE_LOCATION=14995
	I0921 21:55:23.694891  152511 notify.go:214] Checking for updates...
	I0921 21:55:23.697945  152511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0921 21:55:23.699439  152511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
	I0921 21:55:23.700766  152511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube
	I0921 21:55:23.702199  152511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0921 21:55:23.705532  152511 config.go:180] Loaded profile config "kubernetes-upgrade-20220921215522-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0921 21:55:23.705695  152511 config.go:180] Loaded profile config "missing-upgrade-20220921215458-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0921 21:55:23.705864  152511 config.go:180] Loaded profile config "stopped-upgrade-20220921215355-10174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0921 21:55:23.705933  152511 driver.go:365] Setting default libvirt URI to qemu:///system
	I0921 21:55:23.745636  152511 docker.go:137] docker version: linux-20.10.18
	I0921 21:55:23.745760  152511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0921 21:55:23.883773  152511 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:79 SystemTime:2022-09-21 21:55:23.788808147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1017-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:20.10.18 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.10.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0921 21:55:23.883926  152511 docker.go:254] overlay module found
	I0921 21:55:23.886467  152511 out.go:177] * Using the docker driver based on user configuration
	I0921 21:55:23.887834  152511 start.go:284] selected driver: docker
	I0921 21:55:23.887857  152511 start.go:808] validating driver "docker" against <nil>
	I0921 21:55:23.887882  152511 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0921 21:55:23.890108  152511 out.go:177] 
	W0921 21:55:23.891604  152511 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0921 21:55:23.893034  152511 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220921215523-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220921215523-10174
--- PASS: TestNetworkPlugins/group/false (0.51s)
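Exit status 14 (MK_USAGE) is the point of this test: the containerd runtime requires a CNI plugin, so --cni=false is rejected during flag validation, which is why the whole run finishes in half a second. Contrasting invocations from this report (profile names copied from the surrounding tests):

	$ out/minikube-linux-amd64 start -p false-20220921215523-10174 --cni=false --driver=docker --container-runtime=containerd     # rejected: "containerd" requires CNI
	$ out/minikube-linux-amd64 start -p kindnet-20220921215523-10174 --cni=kindnet --driver=docker --container-runtime=containerd # accepted (TestNetworkPlugins/group/kindnet)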

TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220921215355-10174
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

TestPause/serial/Start (57.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220921215721-10174 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220921215721-10174 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (57.975938293s)
--- PASS: TestPause/serial/Start (57.98s)

TestPause/serial/SecondStartNoReconfiguration (16.17s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220921215721-10174 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220921215721-10174 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.15580459s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.17s)

TestNetworkPlugins/group/auto/Start (58.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (58.280698412s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.28s)

TestPause/serial/Pause (0.91s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220921215721-10174 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220921215721-10174 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220921215721-10174 --output=json --layout=cluster: exit status 2 (388.399266ms)

-- stdout --
	{"Name":"pause-20220921215721-10174","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.27.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220921215721-10174","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
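StatusCode 418 ("Paused") plus exit status 2 is the expected reading for a paused cluster; note the apiserver reports Paused while the kubelet reports Stopped. The lifecycle this group walks through, sketched in order with commands from the surrounding tests:

	$ out/minikube-linux-amd64 pause -p pause-20220921215721-10174 --alsologtostderr -v=5
	$ out/minikube-linux-amd64 status -p pause-20220921215721-10174 --output=json --layout=cluster   # exit status 2, StatusCode 418
	$ out/minikube-linux-amd64 unpause -p pause-20220921215721-10174 --alsologtostderr -v=5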

TestPause/serial/Unpause (0.65s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220921215721-10174 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (0.93s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220921215721-10174 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

TestPause/serial/DeletePaused (2.54s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220921215721-10174 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220921215721-10174 --alsologtostderr -v=5: (2.539468184s)
--- PASS: TestPause/serial/DeletePaused (2.54s)

TestPause/serial/VerifyDeletedResources (14.05s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.965709133s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220921215721-10174
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220921215721-10174: exit status 1 (23.899423ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220921215721-10174

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.05s)
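The non-zero docker volume inspect is the assertion here: after delete -p, the profile's volume must be gone, so "No such volume" means cleanup succeeded. The same verification as a one-liner (shell phrasing illustrative):

	$ docker volume inspect pause-20220921215721-10174 >/dev/null 2>&1 || echo "volume removed"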

TestNetworkPlugins/group/kindnet/Start (46.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (46.019869815s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (46.02s)

TestNetworkPlugins/group/cilium/Start (106.07s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220921215524-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E0921 21:59:20.481436   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/functional-20220921213235-10174/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220921215524-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m46.066514026s)
--- PASS: TestNetworkPlugins/group/cilium/Start (106.07s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220921215523-10174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220921215523-10174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-fvw4b" [c4f64d24-c7a0-4192-bc3c-4777d4d5041d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-fvw4b" [c4f64d24-c7a0-4192-bc3c-4777d4d5041d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005383373s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220921215523-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220921215523-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
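The Localhost and HairPin probes differ only in their target: both run nc inside the netcat pod, first against localhost:8080 as a control, then against the pod's own Service name, which generally requires hairpin support from the CNI (a pod reaching itself through its own Service). The pair side by side:

	$ kubectl --context auto-20220921215523-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # control
	$ kubectl --context auto-20220921215523-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin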

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-6dz2z" [414a4d2f-6076-47ef-831b-bf3aa5ecb24f] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013932036s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.59s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220921215523-10174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.59s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220921215523-10174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-fds5q" [39153b7b-8b2c-4253-8523-bafc93253fb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-fds5q" [39153b7b-8b2c-4253-8523-bafc93253fb8] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006369128s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220921215523-10174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220921215523-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220921215523-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (299.02s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0921 22:00:08.447668   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (4m59.022120861s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (299.02s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-p9dvz" [0cd4373d-3afd-4e99-bd2f-f628188f5162] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.013410201s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220921215524-10174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.38s)

TestNetworkPlugins/group/cilium/NetCatPod (9.87s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220921215524-10174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-r2smr" [df4f0263-f00a-4319-847e-5f80273d4200] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-r2smr" [df4f0263-f00a-4319-847e-5f80273d4200] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.005765474s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.87s)

TestNetworkPlugins/group/cilium/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220921215524-10174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.14s)

TestNetworkPlugins/group/cilium/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220921215524-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

TestNetworkPlugins/group/cilium/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220921215524-10174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (38.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0921 22:01:38.505544   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/ingress-addon-legacy-20220921213512-10174/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220921215523-10174 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (38.359644017s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.36s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220921215523-10174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220921215523-10174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-kwzwl" [e0c9cb0a-eda9-4186-ae44-79c000d2c5ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-kwzwl" [e0c9cb0a-eda9-4186-ae44-79c000d2c5ca] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005655s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220921215523-10174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220921215523-10174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-4vjsz" [0079ebcc-47f3-4b1f-ba32-699cadcc18e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-4vjsz" [0079ebcc-47f3-4b1f-ba32-699cadcc18e0] Running
E0921 22:05:07.970752   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:05:08.448270   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005854157s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (118.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220921220722-10174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0921 22:07:24.071063   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/cilium-20220921215524-10174/client.crt: no such file or directory
E0921 22:07:25.493452   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220921220722-10174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m58.85163462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (118.85s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220921220722-10174 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [4f91b940-7e7a-4c8c-b078-81dd7bc2905d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [4f91b940-7e7a-4c8c-b078-81dd7bc2905d] Running
E0921 22:09:27.009836   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.011869547s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220921220722-10174 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220921220722-10174 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220921220722-10174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.61s)

TestStartStop/group/old-k8s-version/serial/Stop (20.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220921220722-10174 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220921220722-10174 --alsologtostderr -v=3: (20.076047999s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174: exit status 7 (93.434981ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220921220722-10174 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (433.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220921220722-10174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0921 22:09:54.693194   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/auto-20220921215523-10174/client.crt: no such file or directory
E0921 22:10:08.447816   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/addons-20220921212740-10174/client.crt: no such file or directory
E0921 22:10:09.333843   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/kindnet-20220921215523-10174/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220921220722-10174 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m13.15371519s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (433.54s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6d946b7fb4-wmnww" [7f1dd10a-7e27-4101-adf7-c6aee82bb325] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01165717s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
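
The wait above is the harness polling for pods by label selector until they report Running and healthy. A roughly equivalent manual check, sketched with kubectl (context and label copied from this run):

	$ kubectl --context old-k8s-version-20220921220722-10174 -n kubernetes-dashboard \
	    wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s

Here the dashboard pod turned healthy in about 5 seconds, well inside the 9m0s budget.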

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6d946b7fb4-wmnww" [7f1dd10a-7e27-4101-adf7-c6aee82bb325] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006069329s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220921220722-10174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220921220722-10174 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)
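
The image verification shells into the node and dumps the containerd image store as JSON via crictl. A sketch of the same inspection, assuming jq is available locally and that crictl's JSON keeps repo tags under .images[].repoTags (profile name copied from this run):

	$ minikube ssh -p old-k8s-version-20220921220722-10174 "sudo crictl images -o json" \
	    | jq -r '.images[].repoTags[]'

Images outside minikube's expected set (the kindnetd and busybox tags above) are reported but do not fail the test.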

TestStartStop/group/old-k8s-version/serial/Pause (3.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220921220722-10174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174: exit status 2 (380.756394ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174: exit status 2 (380.464079ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220921220722-10174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220921220722-10174 -n old-k8s-version-20220921220722-10174
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.07s)
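
The pause round trip above maps onto plain CLI calls. A sketch, with -p PROFILE standing in for the long profile name used here:

	$ minikube pause -p PROFILE
	$ minikube status --format={{.APIServer}} -p PROFILE   # prints "Paused", exits 2
	$ minikube status --format={{.Kubelet}} -p PROFILE     # prints "Stopped", exits 2
	$ minikube unpause -p PROFILE
	$ minikube status --format={{.APIServer}} -p PROFILE   # exits 0 again

As with exit status 7 on a stopped host, exit status 2 on a paused cluster is an expected non-zero code, logged as "may be ok".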

TestStartStop/group/newest-cni/serial/FirstStart (35.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220921221720-10174 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220921221720-10174 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: (35.921255055s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.92s)
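
Stripped of harness paths, the first start boils down to a CNI-enabled start with an explicit pod CIDR. A sketch assuming a released minikube binary and a shortened profile name (all flags copied from the run above):

	$ minikube start -p newest-cni --memory=2200 \
	    --wait=apiserver,system_pods,default_sa \
	    --feature-gates ServerSideApply=true \
	    --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
	    --driver=docker --container-runtime=containerd --kubernetes-version=v1.25.2

Waiting only on apiserver, system_pods, and default_sa is deliberate: as the warnings below note, cni mode requires additional setup before ordinary pods can schedule.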

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220921220439-10174 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220921220439-10174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)
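
The addon enable above overrides both the image and the registry for metrics-server, pointing them at a placeholder. A sketch of the call and its follow-up check (names copied from this run; fake.domain is the test's stand-in registry):

	$ minikube addons enable metrics-server -p embed-certs-20220921220439-10174 \
	    --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
	    --registries=MetricsServer=fake.domain
	$ kubectl --context embed-certs-20220921220439-10174 -n kube-system \
	    describe deploy/metrics-server

The describe call is presumably how the test confirms the overridden image reference landed in the deployment spec.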

TestStartStop/group/embed-certs/serial/Stop (4.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220921220439-10174 --alsologtostderr -v=3
E0921 22:17:26.934307   10174 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-14995-3622-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/bridge-20220921215523-10174/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220921220439-10174 --alsologtostderr -v=3: (4.718640092s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (4.72s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220921220439-10174 -n embed-certs-20220921220439-10174: exit status 7 (103.555332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220921220439-10174 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.58s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220921221720-10174 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.58s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220921221720-10174 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220921221720-10174 --alsologtostderr -v=3: (1.286422958s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174: exit status 7 (101.23206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220921221720-10174 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (29.18s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220921221720-10174 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220921221720-10174 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.25.2: (28.802595587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.18s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220921221720-10174 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (2.92s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220921221720-10174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174: exit status 2 (379.111591ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174: exit status 2 (372.814204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220921221720-10174 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220921221720-10174 -n newest-cni-20220921221720-10174
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.92s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.63s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220921220832-10174 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220921220832-10174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.63s)

TestStartStop/group/no-preload/serial/Stop (1.27s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220921220832-10174 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220921220832-10174 --alsologtostderr -v=3: (1.27127583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.27s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220921220832-10174 -n no-preload-20220921220832-10174: exit status 7 (90.84697ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220921220832-10174 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.65s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220921221118-10174 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220921221118-10174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.65s)

TestStartStop/group/default-k8s-different-port/serial/Stop (1.77s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220921221118-10174 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220921221118-10174 --alsologtostderr -v=3: (1.765857267s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (1.77s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220921221118-10174 -n default-k8s-different-port-20220921221118-10174: exit status 7 (93.898512ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220921221118-10174 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)


Test skip (23/266)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.2/cached-images (0.00s)

TestDownloadOnly/v1.25.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.2/binaries (0.00s)

TestDownloadOnly/v1.25.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.2/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (0.32s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220921215523-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220921215523-10174
--- SKIP: TestNetworkPlugins/group/kubenet (0.32s)
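
Each skipped network-plugin group still creates a profile before the skip decision is made, so the harness finishes with a cleanup pass. The equivalent manual cleanup, sketched with the profile name from this run:

	$ minikube delete -p kubenet-20220921215523-10174

The same delete-on-skip pattern repeats for the flannel, custom-flannel, and disable-driver-mounts entries that follow.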

TestNetworkPlugins/group/flannel (0.25s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220921215523-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220921215523-10174
--- SKIP: TestNetworkPlugins/group/flannel (0.25s)

TestNetworkPlugins/group/custom-flannel (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220921215524-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220921215524-10174
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.28s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220921220831-10174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220921220831-10174
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
